Similar Documents
20 similar documents were found (search time: 15 ms).
1.
This article is designed to point out the close connection between recursive estimation procedures, such as Kalman filter theory, familiar to control engineers, and linear least squares estimators and estimators that include prior information in the form of linear restrictions, such as mixed estimators and ridge estimators, familiar to statisticians. The only difference between the two points of view seems to be a difference in terminology. To demonstrate this point, it is shown how the Kalman filter equations can be derived from an existing textbook account of linear least squares theory and the notion of combining prior information in linear models, that is, the Goldberger–Theil mixed-estimation point of view. The author advocates introducing these ideas early in the teaching of least squares estimation.
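To make the terminological point concrete, here is a minimal numerical sketch (not from the article): updating a linear least squares estimate one observation at a time, starting from prior information in the mixed-estimation sense, reproduces the Kalman measurement-update equations for a static state. The data, the prior (b, P), and the noise variance sigma2 are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Static "state": unknown regression coefficients beta
# (a Kalman state with identity transition and no process noise).
beta_true = np.array([2.0, -1.0])
n, sigma2 = 50, 0.25

X = rng.normal(size=(n, 2))
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Prior information (the "mixed estimation" restrictions): beta ~ (b0, P0).
b = np.zeros(2)          # prior mean
P = np.eye(2) * 10.0     # prior covariance

# Recursive least squares = Kalman measurement update, one observation at a time.
for xi, yi in zip(X, y):
    xi = xi.reshape(1, -1)
    S = xi @ P @ xi.T + sigma2           # innovation variance
    K = (P @ xi.T) / S                   # Kalman gain
    b = b + (K * (yi - xi @ b)).ravel()  # updated estimate
    P = P - K @ xi @ P                   # updated covariance

# Batch least squares with the same prior gives the same answer.
P0_inv = np.linalg.inv(np.eye(2) * 10.0)
b_batch = np.linalg.solve(P0_inv + X.T @ X / sigma2, X.T @ y / sigma2)
print(b, b_batch)  # the two estimates agree up to floating-point error
```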

2.
Document classification is an area of great importance for which many classification methods have been developed. However, most of these methods cannot generate time-dependent classification rules, so they are not the best choice for problems with time-varying structure. To address this problem, we propose a varying naïve Bayes model, a natural extension of the naïve Bayes model that allows for time-dependent classification rules. Kernel smoothing is developed for parameter estimation and a BIC-type criterion is proposed for feature selection. Asymptotic theory is developed and numerical studies are conducted. Finally, the proposed method is demonstrated on a real dataset generated by the Mayor Public Hotline of Changchun, the capital city of Jilin Province in Northeast China.
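The abstract does not give the estimator in detail; the following sketch only illustrates the general idea of a time-varying naïve Bayes rule with binary features, where the class prior and the per-class feature probabilities are estimated with kernel weights centered at a query time. The Gaussian kernel, the bandwidth h, and the drift pattern in the toy data are assumptions, not the paper's specification.

```python
import numpy as np

def varying_nb_predict(times, X, y, t0, x_new, h=0.1):
    """Kernel-smoothed naive Bayes with binary features.

    Parameters are local to time t0: observation i gets weight
    K((times[i] - t0) / h), so the class prior and the per-class feature
    probabilities may drift over time."""
    w = np.exp(-0.5 * ((times - t0) / h) ** 2)     # Gaussian kernel weights
    classes = np.unique(y)
    log_post = []
    for c in classes:
        wc = w * (y == c)
        prior = wc.sum() / w.sum()
        # kernel-weighted Bernoulli parameter per feature (lightly smoothed)
        theta = (wc @ X + 1.0) / (wc.sum() + 2.0)
        ll = np.sum(x_new * np.log(theta) + (1 - x_new) * np.log(1 - theta))
        log_post.append(np.log(prior) + ll)
    return classes[int(np.argmax(log_post))]

# toy data: the link between the feature and the label drifts with time
rng = np.random.default_rng(1)
n = 2000
times = rng.uniform(size=n)
y = rng.integers(0, 2, size=n)
p1 = np.where(times < 0.5, 0.8, 0.2)               # class-1 feature probability changes over time
X = rng.binomial(1, np.where(y == 1, p1, 0.5)).reshape(-1, 1)

print(varying_nb_predict(times, X, y, t0=0.25, x_new=np.array([1])))
print(varying_nb_predict(times, X, y, t0=0.75, x_new=np.array([1])))
```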

3.
In this paper, three competing survival function estimators are compared under the assumptions of the so-called Koziol–Green model, a simple model of informative random censoring. It is shown that the model-specific estimators of Ebrahimi and of Abdushukurov, Cheng, and Lin are asymptotically equivalent. Further, exact expressions for the (noncentral) moments of these estimators are given, and their biases are compared analytically with the bias of the familiar Kaplan–Meier estimator. Finally, MSE comparisons of the three estimators are given for selected rates of censoring.
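For readers unfamiliar with the Koziol–Green model (1 − G = (1 − F)^β, so that the lifetime survival function equals (1 − H)^p with p the probability of being uncensored), here is a small sketch, under exponential assumptions of my own choosing, of the Abdushukurov–Cheng–Lin form Ŝ(t) = (1 − H_n(t))^{p̂} next to the Kaplan–Meier estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Koziol-Green model: the censoring survival function is a power of the
# lifetime survival function, 1 - G = (1 - F)^beta.  With exponential
# lifetimes this simply means exponential censoring with a different rate.
lam, beta_kg = 1.0, 0.5
T = rng.exponential(1.0 / lam, size=n)              # true lifetimes, F
C = rng.exponential(1.0 / (lam * beta_kg), size=n)  # censoring times, G
Z = np.minimum(T, C)                                # observed times
delta = (T <= C).astype(float)                      # uncensoring indicators

t_grid = np.linspace(0.1, 2.0, 5)

# ACL estimator: S_hat(t) = (1 - H_n(t))^{p_hat}, where H_n is the empirical
# distribution of the observed Z's and p_hat the proportion of uncensored cases.
p_hat = delta.mean()
H_n = np.array([(Z <= t).mean() for t in t_grid])
S_acl = (1.0 - H_n) ** p_hat

# Kaplan-Meier estimator on the same grid, for comparison.
order = np.argsort(Z)
Z_s, d_s = Z[order], delta[order]
at_risk = n - np.arange(n)
km_factors = 1.0 - d_s / at_risk
S_km = np.array([np.prod(km_factors[Z_s <= t]) for t in t_grid])

print(np.exp(-lam * t_grid))   # true survival function
print(S_acl)
print(S_km)
```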

4.
In this article, we introduce a new class of estimators, called the sK type principal components estimators, to combat multicollinearity; the class includes the principal components regression (PCR) estimator, the rk estimator, and the sK estimator as special cases. Necessary and sufficient conditions for the superiority of the new estimator over the PCR estimator, the rk estimator, and the sK estimator are derived under the mean squared error matrix criterion. A Monte Carlo simulation study and a numerical example illustrate the performance of the proposed estimator.
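The sK type class itself is not specified in the abstract; the sketch below only shows its best-known special case, the PCR estimator, built from the leading eigenvectors of X'X on a deliberately collinear toy design. The data and the choice r = 3 are illustrative.

```python
import numpy as np

def pcr_estimator(X, y, r):
    """Principal components regression estimator that keeps the r leading
    components of X'X; a special case of the class discussed above."""
    XtX = X.T @ X
    eigval, eigvec = np.linalg.eigh(XtX)
    idx = np.argsort(eigval)[::-1][:r]       # r largest eigenvalues
    T_r = eigvec[:, idx]
    # beta_PCR = T_r (T_r' X'X T_r)^{-1} T_r' X' y
    return T_r @ np.linalg.solve(T_r.T @ XtX @ T_r, T_r.T @ X.T @ y)

# collinear toy design
rng = np.random.default_rng(3)
n, p = 100, 4
Z = rng.normal(size=(n, p))
Z[:, 3] = Z[:, 0] + 0.01 * rng.normal(size=n)    # near-exact collinearity
beta = np.array([1.0, 0.5, -0.5, 0.0])
y = Z @ beta + rng.normal(size=n)

print(pcr_estimator(Z, y, r=3))                  # drop the near-null component
print(np.linalg.lstsq(Z, y, rcond=None)[0])      # unstable OLS, for comparison
```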

5.
6.
7.
The size and power properties of the Cox–Stuart test for detecting a monotonic deterministic trend in hydrological time series are analyzed using the Monte Carlo method, studying the influence of the distribution, the series length, and the trend slope. Results indicate good size in all cases. The power is high for series longer than 60 observations with a strong trend slope, and for low or medium variability with a medium slope. The power declines as the slope and series length decrease and as variability increases. The properties are better for skewed distributions than for symmetric ones. The test is slightly weaker than the Mann–Kendall test.
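A sketch of the Cox–Stuart test and of a size/power check in the spirit of the study; the slope, series length, and significance level are illustrative, and scipy.stats.binomtest requires SciPy ≥ 1.7.

```python
import numpy as np
from scipy.stats import binomtest

def cox_stuart_test(x):
    """Cox-Stuart trend test: pair the first half of the series with the
    second half and count the signs of the differences; under no trend the
    number of positive signs is Binomial(m, 1/2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n % 2 == 1:                        # drop the middle observation
        x = np.delete(x, n // 2)
    half = len(x) // 2
    diff = x[half:] - x[:half]
    diff = diff[diff != 0]                # ties carry no information
    n_pos = int((diff > 0).sum())
    return binomtest(n_pos, n=len(diff), p=0.5, alternative="two-sided").pvalue

# quick Monte Carlo check of size and power, in the spirit of the study above
rng = np.random.default_rng(4)
n, reps, slope = 60, 2000, 0.02
size = np.mean([cox_stuart_test(rng.normal(size=n)) < 0.05 for _ in range(reps)])
power = np.mean([cox_stuart_test(slope * np.arange(n) + rng.normal(size=n)) < 0.05
                 for _ in range(reps)])
print("empirical size:", size, "empirical power:", power)
```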

8.
The analysis of data using a stable probability distribution with tail parameter α<2 (sometimes called a Pareto–Lévy distribution) seems to have been avoided in the past partly because of the lack of a significance test for the mean, even though it appears to be the correct distribution for describing returns in the financial markets. A z test for the significance of the mean of a stable distribution with tail parameter 1<α≤2 is defined. Tables are calculated and displayed for the 5% and 1% significance levels over a range of tail and skew parameters α and β. Through the use of maximum likelihood estimates, the test becomes a practical tool even when α and β are not determined very accurately. As an example, the z test is applied to the daily closing prices of the Dow Jones Industrial Average from 2 January 1940 to 19 March 2010.
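A hedged sketch of how such a z statistic can be formed: for 1 < α ≤ 2 the mean of n i.i.d. stable(α, β, γ, δ) variables is again stable with scale γ·n^{1/α−1}, so the standardized mean can be referred to a standard stable distribution. The example uses β = 0 to sidestep parameterization subtleties, plugs in the true parameters where the paper would use maximum likelihood estimates (e.g. scipy.stats.levy_stable.fit, which can be slow), and is not the paper's tabulated test.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(5)

alpha, beta, gamma, delta = 1.7, 0.0, 1.0, 0.0
x = levy_stable.rvs(alpha, beta, loc=delta, scale=gamma, size=500, random_state=rng)

# In practice alpha, beta, gamma would be maximum likelihood estimates; here the
# true values stand in for the estimates.
n = len(x)
mu0 = 0.0                                      # hypothesized mean (exists since alpha > 1)
scale_of_mean = gamma * n ** (1.0 / alpha - 1.0)
z = (x.mean() - mu0) / scale_of_mean

# Two-sided p-value from the standardized stable distribution rather than the
# normal; for alpha = 2 this reduces to an ordinary z test (up to scaling).
p_value = levy_stable.sf(abs(z), alpha, beta) + levy_stable.cdf(-abs(z), alpha, beta)
print(z, p_value)
```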

9.
The proportion of triangles in a Poisson–Voronoi tessellation has recently been represented as a fivefold integral. Here we give a simpler representation, reduce it to a fourfold integral, and discuss its numerical evaluation.
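The integral representation itself is not reproduced here, but the quantity is easy to check by simulation; the sketch below counts three-sided cells of a planar Poisson–Voronoi tessellation using scipy.spatial.Voronoi, with an inner observation window (sizes chosen arbitrarily) to limit boundary effects.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(6)

# Simulate a planar Poisson process on a large square and keep only cells whose
# generating point lies in a smaller inner window, to limit boundary effects.
intensity, L, inner = 1.0, 100.0, 60.0
n_pts = rng.poisson(intensity * L * L)
pts = rng.uniform(0.0, L, size=(n_pts, 2))

vor = Voronoi(pts)

lo, hi = (L - inner) / 2.0, (L + inner) / 2.0
cells, triangles = 0, 0
for i, p in enumerate(pts):
    if not (lo < p[0] < hi and lo < p[1] < hi):
        continue
    region = vor.regions[vor.point_region[i]]
    if -1 in region or len(region) == 0:       # unbounded cell, skip
        continue
    cells += 1
    if len(region) == 3:                       # a triangular cell
        triangles += 1

print(cells, triangles, triangles / cells)     # Monte Carlo estimate of the proportion
```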

10.
It is shown that the exact null distribution of the likelihood ratio criterion for the sphericity test in the p-variate normal case is identical to the marginal distribution of the first component of a (p − 1)-variate generalized Dirichlet model with a given set of parameters. The exact distribution of the likelihood ratio criterion so obtained has a general format for every p. A novel idea is introduced through which the complicated exact null distribution of the sphericity test criterion in multivariate statistical analysis is converted into an easily tractable marginal density in a generalized Dirichlet model, providing a direct and simple method for computing p-values. The computation of p-values and a table of critical points for p = 3 and 4 are also presented.
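For reference, the likelihood ratio criterion for sphericity (H₀: Σ = σ²I_p) is commonly written in terms of the sample covariance matrix S from n observations, or equivalently its eigenvalues ℓ₁, …, ℓ_p, as

```latex
\lambda \;=\; \left[\frac{\det S}{\bigl(\tfrac{1}{p}\operatorname{tr} S\bigr)^{p}}\right]^{n/2},
\qquad
W \;=\; \lambda^{2/n} \;=\;
\frac{\prod_{i=1}^{p}\ell_i}{\bigl(\tfrac{1}{p}\sum_{i=1}^{p}\ell_i\bigr)^{p}},
```

so that W is the p-th power of the ratio of the geometric to the arithmetic mean of the eigenvalues; the abstract's result concerns the exact null distribution of this criterion.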

11.
In this paper, we identify risk factors for chronic obstructive pulmonary disease (COPD) and propose a nomogram for COPD. Data were from the 6th Korean National Health and Nutrition Examination Survey (2013–2015). First, a chi-square test was performed to identify risk factors associated with the incidence of COPD. A nomogram was then constructed using a naïve Bayesian classifier model in order to visualize the risk factors of COPD. The nomogram shows that asthma had the strongest effect on COPD incidence. We additionally compared the Bayesian nomogram with a logistic regression nomogram. Finally, a ROC curve and a calibration plot were used to assess the nomogram.
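The survey data are not available here; the following sketch only illustrates the modelling step with scikit-learn's BernoulliNB on simulated binary risk factors (the factor names, prevalences, and effect sizes are hypothetical), using the per-feature log-probability ratios as crude nomogram-style scores and the ROC AUC for assessment.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 3000

# Hypothetical binary risk factors (stand-ins for survey variables such as
# asthma, smoking, age group); asthma is given the strongest effect.
asthma = rng.binomial(1, 0.08, n)
smoking = rng.binomial(1, 0.35, n)
age_over_60 = rng.binomial(1, 0.25, n)
logit = -3.0 + 2.0 * asthma + 0.8 * smoking + 0.6 * age_over_60
copd = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([asthma, smoking, age_over_60])
X_tr, X_te, y_tr, y_te = train_test_split(X, copd, test_size=0.3, random_state=0)

clf = BernoulliNB().fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]

# Log-probability ratios per factor play the role of nomogram "points".
scores = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
print(dict(zip(["asthma", "smoking", "age_over_60"], scores.round(2))))
print("AUC:", roc_auc_score(y_te, prob))
```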

12.
13.
Preliminary tests of significance on crucial assumptions are often done before drawing the inferences of primary interest. In a factorial trial, the data may be pooled across the columns or rows for making inferences concerning the efficacy of the drugs (simple effect) in the absence of interaction. Pooling the data has the advantage of higher power due to the larger sample size. On the other hand, in the presence of interaction, such pooling may seriously inflate the type I error rate in testing for the simple effect.

A preliminary test for interaction is therefore in order. If this preliminary test is not significant at some prespecified level, the data are pooled for testing the efficacy of the drugs at a specified α level; otherwise, the corresponding cell means are used for testing the efficacy of the drugs at the specified α. This paper demonstrates that this adaptive procedure may seriously inflate the overall type I error rate. Such inflation happens even in the absence of interaction.

One interesting result is that the type I error rate of the adaptive procedure depends on the interaction and the square root of the sample size only through their product. One consequence of this result is as follows. No matter how small the non-zero interaction might be, the inflation of the type I error rate of the always-pool procedure will eventually become unacceptable as the sample size increases. Therefore, in a very large study, even though the interaction is suspected to be very small but non-zero, the always-pool procedure may seriously inflate the type I error rate in testing for the simple effects.

It is concluded that the 2 × 2 factorial design is not an efficient design for detecting simple effects unless the interaction is negligible.
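A simulation sketch of the adaptive procedure described above, under assumptions of my own (normal responses, equal cell sizes, and a configuration in which the simple effect of drug A at B = 0 is zero while the interaction is not): it estimates the type I error rate of the simple-effect test as a function of the interaction size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def adaptive_type1(n_per_cell, interaction, reps=5000, alpha_pre=0.05, alpha=0.05):
    """Type I error rate for testing the simple effect of A at B = 0 when the
    decision to pool over B is made by a preliminary test of interaction."""
    rejections = 0
    # cell means mu[a, b]: simple effect of A at B = 0 is zero; interaction = `interaction`
    mu = np.array([[0.0, 0.0], [0.0, interaction]])
    for _ in range(reps):
        y = {(a, b): rng.normal(mu[a, b], 1.0, n_per_cell) for a in (0, 1) for b in (0, 1)}
        # preliminary test of interaction: (y11 - y01) - (y10 - y00)
        contrast = y[1, 1].mean() - y[0, 1].mean() - y[1, 0].mean() + y[0, 0].mean()
        pooled_var = np.mean([y[c].var(ddof=1) for c in y])
        se = np.sqrt(4.0 * pooled_var / n_per_cell)
        df = 4 * (n_per_cell - 1)
        p_int = 2.0 * stats.t.sf(abs(contrast) / se, df)
        if p_int > alpha_pre:
            # pool over B and test the effect of A
            a1 = np.concatenate([y[1, 0], y[1, 1]])
            a0 = np.concatenate([y[0, 0], y[0, 1]])
            p = stats.ttest_ind(a1, a0).pvalue
        else:
            # use the cell means: test A at B = 0 only
            p = stats.ttest_ind(y[1, 0], y[0, 0]).pvalue
        rejections += (p < alpha)
    return rejections / reps

for delta in (0.0, 0.3, 0.6):
    print(delta, adaptive_type1(n_per_cell=50, interaction=delta))
```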

14.
This article investigates the effect of estimating the unknown degrees of freedom on the efficient estimation of the remaining parameters in Spanos’ conditional t heteroskedastic model. We compare by simulation three maximum likelihood estimators (MLEs) of the remaining parameters: the MLE when all parameters, including the degrees of freedom, are estimated by maximum likelihood; the MLE when the degrees of freedom is estimated by a method of moments estimator; and the MLE when the degrees of freedom is erroneously specified. The latter two methods are found to perform poorly compared with the first for inference on the variance parameters of the model. Thus, efficient estimation of the degrees of freedom by the MLE is important for efficient estimation of the remaining variance parameters.
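The Spanos model is not reproduced here; the sketch below only contrasts, on i.i.d. Student-t data, the method of moments estimator of the degrees of freedom (from the excess kurtosis 6/(ν − 4), valid for ν > 4) with the full MLE from scipy.stats.t.fit. The sample size and ν are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
nu_true = 7.0
x = stats.t.rvs(nu_true, size=2000, random_state=rng)

# Method of moments: the excess kurtosis of t_nu is 6 / (nu - 4) for nu > 4,
# so nu_hat = 4 + 6 / excess_kurtosis (undefined when the estimate is <= 0).
k_excess = stats.kurtosis(x, fisher=True, bias=False)
nu_mom = 4.0 + 6.0 / k_excess if k_excess > 0 else np.inf

# Full maximum likelihood (jointly with location and scale).
nu_mle, loc, scale = stats.t.fit(x)

print("method of moments:", round(nu_mom, 2))
print("maximum likelihood:", round(nu_mle, 2), "loc", round(loc, 3), "scale", round(scale, 3))
```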

15.
In the identity of exchange I distinguish between currency and bank payments on one side and several types of transactions and the transfer of idle money on the other. An attempt is made to measure these variables, with varying success. On the payments side I argue that currency velocity is constant (and low) and that the vast rise of bank money velocity is largely due to increased short-term investment of idle funds. The results suggest an upward shift in the level of transactions in 1968–1972, which I attribute to changes in the international role of the dollar.

16.
In mixed models the mean square error (MSE) of empirical best linear unbiased estimators generally cannot be written in closed form. Unlike traditional methods of inference, parametric bootstrapping does not require approximation of this MSE or the test statistic distribution. Data were simulated to compare coverage rates for intervals based on the naïve MSE approximation and the method of Kenward and Roger, and parametric bootstrap intervals (Efron's percentile, Hall's percentile, bootstrap-t). The Kenward–Roger method performed best and the bootstrap-t almost as well. Intervals were also compared for a small set of real data. Implications for minimum sample size are discussed.
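A sketch of the parametric bootstrap idea for a single fixed effect in a random-intercept model, using statsmodels' MixedLM; the design, the number of groups, and B are illustrative, and only Efron's and Hall's percentile intervals are shown (a bootstrap-t interval would additionally studentize each replicate by its own standard error).

```python
import warnings
import numpy as np
import statsmodels.api as sm

warnings.simplefilter("ignore")   # silence convergence warnings from repeated small-sample refits
rng = np.random.default_rng(10)

# random-intercept data with few groups, where naive MSE approximations struggle
n_groups, n_per = 8, 6
groups = np.repeat(np.arange(n_groups), n_per)
x = rng.uniform(size=n_groups * n_per)
u = rng.normal(0.0, 1.0, n_groups)                 # random intercepts
y = 1.0 + 2.0 * x + u[groups] + rng.normal(0.0, 0.5, len(x))
X = sm.add_constant(x)

fit = sm.MixedLM(y, X, groups=groups).fit()
beta_hat = np.asarray(fit.fe_params)
tau2 = float(np.asarray(fit.cov_re)[0, 0])         # random-intercept variance
sigma2 = float(fit.scale)                          # residual variance

# parametric bootstrap: simulate from the fitted model, refit, collect the slope
B, slopes = 300, []
for _ in range(B):
    u_b = rng.normal(0.0, np.sqrt(tau2), n_groups)
    y_b = X @ beta_hat + u_b[groups] + rng.normal(0.0, np.sqrt(sigma2), len(x))
    fit_b = sm.MixedLM(y_b, X, groups=groups).fit()
    slopes.append(np.asarray(fit_b.fe_params)[1])
slopes = np.array(slopes)

slope = beta_hat[1]
efron = np.percentile(slopes, [2.5, 97.5])              # Efron's percentile interval
hall = (2 * slope - efron[1], 2 * slope - efron[0])     # Hall's (basic) percentile interval
print("Efron:", efron, "Hall:", hall)
```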

17.
18.
The power of the Fisher permutation test extended to 2 × k tables is evaluated unconditionally as a function of the underlying cell probabilities in the table. These results are then applied in assessing the sensitivity of two-generation cancer bioassays in which a fixed number of pups from each litter born in the first generation are selected to continue on test in the second generation. In this case, the two rows of the table correspond to the two treatment groups and the k columns correspond to the number of animals responding in a litter. The cell probabilities in this application are based on a suitable beta-binomial superpopulation model.
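A Monte Carlo sketch of such an unconditional power calculation under assumed beta-binomial parameters: litters are simulated for two treatment groups, classified by number of responders into a 2 × (litter size + 1) table, and tested by permuting litter group labels. A chi-square statistic is used as the permutation statistic purely for simplicity, in place of the exact extension of Fisher's test; all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def perm_pvalue(counts_control, counts_treated, litter_size, n_perm=400):
    """Permutation test on the 2 x (litter_size + 1) table of litters classified
    by number of responders, permuting the litter group labels."""
    all_counts = np.concatenate([counts_control, counts_treated])
    labels = np.array([0] * len(counts_control) + [1] * len(counts_treated))
    k = litter_size + 1

    def statistic(lab):
        table = np.zeros((2, k))
        for g in (0, 1):
            vals, cnts = np.unique(all_counts[lab == g], return_counts=True)
            table[g, vals] = cnts
        expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
        mask = expected > 0
        return ((table - expected)[mask] ** 2 / expected[mask]).sum()

    obs = statistic(labels)
    perms = [statistic(rng.permutation(labels)) for _ in range(n_perm)]
    return (1 + sum(p >= obs for p in perms)) / (n_perm + 1)

# power under a beta-binomial superpopulation: control vs. treated response rates
litters, litter_size, reps = 20, 10, 200
power = 0
for _ in range(reps):
    ctrl = stats.betabinom.rvs(litter_size, 1.0, 9.0, size=litters, random_state=rng)  # ~10% response
    trt = stats.betabinom.rvs(litter_size, 3.0, 7.0, size=litters, random_state=rng)   # ~30% response
    power += perm_pvalue(ctrl, trt, litter_size) < 0.05
print("estimated power:", power / reps)
```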

19.
20.
Consider a regression model in which the variance function, or its νth derivative, has a change/discontinuity point at an unknown location. To use local polynomial fits, the log-variance function, which removes the positivity constraint, is targeted. The location and the jump size of the change point are estimated from a one-sided kernel-weighted local-likelihood function based on the χ²-distribution. The whole structure of the log-variance function is then estimated using the data split at the estimated location. Asymptotic results for the proposed estimators are described, and numerical work demonstrates the performance of the methods with simulated and real examples.
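A much-simplified illustration of the change point step (local constant averages of log squared residuals with one-sided Gaussian kernel weights, rather than the paper's local polynomial, χ²-based local likelihood); the mean function is removed with a crude moving average, and the bandwidth and candidate grid are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 1000
x = np.sort(rng.uniform(size=n))
sd = np.where(x < 0.6, 0.5, 1.5)                 # the standard deviation jumps at x = 0.6
y = np.sin(2 * np.pi * x) + sd * rng.normal(size=n)

# crude mean removal via a moving average; a serious implementation would
# estimate the mean function properly first
trend = np.convolve(y, np.ones(25) / 25, mode="same")
log_r2 = np.log((y - trend) ** 2 + 1e-12)

def one_sided_mean(x0, side, h=0.05):
    """Kernel-weighted average of log squared residuals using data strictly on
    one side of x0 (side = -1 for the left, +1 for the right)."""
    d = (x - x0) * side
    w = np.where(d > 0, np.exp(-0.5 * (d / h) ** 2), 0.0)
    return np.sum(w * log_r2) / np.sum(w)

candidates = np.linspace(0.1, 0.9, 161)
jumps = np.array([one_sided_mean(c, +1) - one_sided_mean(c, -1) for c in candidates])
i = int(np.argmax(np.abs(jumps)))
print("estimated change point:", round(candidates[i], 3),
      "estimated jump in log-variance:", round(jumps[i], 3))
```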
