Similar Literature
20 similar documents found.
1.
Classification and regression trees have been useful in medical research for constructing algorithms for disease diagnosis or prognostic prediction. Jin et al. [Classification algorithms for hip fracture prediction based on recursive partitioning methods. Med. Decis. Mak. 2004;24:386–398] developed a robust and cost-saving tree (RACT) algorithm, applied to classifying hip fracture risk after 5-year follow-up using data from the Study of Osteoporotic Fractures (SOF). Although conventional recursive partitioning algorithms are well developed, they still have limitations: binary splits may generate a big tree with many layers, while trinary splits may produce too many nodes. In this paper, we propose a classification approach combining trinary and binary splits to generate a trinary–binary tree. A new non-inferiority test of entropy is used to select between binary and trinary splits. We apply the modified method to the SOF data to construct a trinary–binary classification rule for predicting the risk of osteoporotic hip fracture. The new classification tree has good statistical utility: it is statistically non-inferior to the optimal binary tree and to the RACT on the testing sample, and it is also cost-saving. It may be useful in clinical applications: femoral neck bone mineral density, age, height loss and weight gain since age 25 can identify subjects with elevated 5-year hip fracture risk without loss of statistical efficiency.
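The entropy-based split selection that recursive partitioning relies on can be illustrated with a minimal sketch: compute the weighted child-node entropy of a candidate binary and a candidate trinary split and compare them. This is an illustrative toy example, not the paper's non-inferiority test; the data, cut points and variable names here are hypothetical.

```python
import numpy as np

def split_entropy(labels, groups):
    """Weighted average entropy of the child nodes; `groups` assigns each
    observation to a child (2 distinct values = binary split, 3 = trinary)."""
    n = len(labels)
    total = 0.0
    for g in np.unique(groups):
        y = labels[groups == g]
        p = np.bincount(y) / len(y)
        p = p[p > 0]
        total += (len(y) / n) * -(p * np.log2(p)).sum()
    return total

# Hypothetical covariate where the classes separate cleanly at 0.5:
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 300)
y = (x > 0.5).astype(int)
binary = (x > 0.5).astype(int)              # binary cut at 0.5
trinary = np.digitize(x, [0.33, 0.66])      # trinary cut at 0.33 / 0.66
h_bin, h_tri = split_entropy(y, binary), split_entropy(y, trinary)
```

Here the binary split yields pure children (entropy 0) while the trinary split leaves a mixed middle node, so the binary cut would be preferred at this node.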

2.
In this paper we discuss the recursive (or online) estimation in (i) regression and (ii) autoregressive integrated moving average (ARIMA) time series models. The adopted approach uses Kalman filtering techniques to calculate estimates recursively. This approach is used for the estimation of constant as well as time varying parameters. In the first section of the paper we consider the linear regression model. We discuss recursive estimation both for constant and time varying parameters. For constant parameters, Kalman filtering specializes to recursive least squares. In general, we allow the parameters to vary according to an autoregressive integrated moving average process and update the parameter estimates recursively. Since the stochastic model for the parameter changes will rarely be known, simplifying assumptions have to be made. In particular we assume a random walk model for the time varying parameters and show how to determine whether the parameters are changing over time. This is illustrated with an example.
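The constant-parameter special case mentioned above, where Kalman filtering reduces to recursive least squares, can be sketched as follows. This is a generic textbook recursion, not the paper's own implementation; the diffuse-prior scale `delta` is an assumption for illustration.

```python
import numpy as np

def recursive_least_squares(X, y, delta=1000.0):
    """Recursive least squares: the Kalman filter with a constant-parameter
    state equation (theta_t = theta_{t-1}, no state noise)."""
    n, p = X.shape
    theta = np.zeros(p)          # current parameter estimate
    P = delta * np.eye(p)        # parameter covariance (diffuse prior)
    for t in range(n):
        x = X[t]
        # Kalman gain for the observation y_t = x' theta + noise
        k = P @ x / (1.0 + x @ P @ x)
        theta = theta + k * (y[t] - x @ theta)   # measurement update
        P = P - np.outer(k, x @ P)               # covariance update
    return theta

# With enough data the recursion converges to the batch OLS solution:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=500)
theta = recursive_least_squares(X, y)
```

Allowing the parameters to follow a random walk would only require adding a state-noise term to the covariance update `P` before each step.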

3.
A linear recursive technique that does not use the Kalman filter approach is proposed to estimate missing observations in a univariate time series. It is assumed that the series follows an invertible ARIMA model. The procedure is based on the restricted forecasting approach, and the recursive linear estimators are optimal in terms of minimum mean-square error.

4.
In this paper, a new sequential acceptance sampling plan in the presence of inspection errors is developed. A suitable profit objective function is employed to optimize the lot sentencing problem. A backward recursive approach is applied to obtain the profit of the different decisions at each stage of sampling. The required probabilities are obtained using Bayes' rule. A case study is solved to illustrate the application of the proposed models, sensitivity analyses are carried out on the parameters of the proposed methodologies, and the behaviour of the models under parameter changes is investigated.

5.
Clustering is a common and important issue, and finite mixture models based on the normal distribution are frequently used to address the problem. In this article, we consider a classification model and build a mixture model around it. A good assessment of the allocation of observations and number of clusters is easily obtained from this approach.

6.
The kernel function method developed by Yamato (1971) to estimate a probability density function is essentially a way of smoothing the empirical distribution function. This paper shows how one can generalize this method to estimate signals for a semimartingale model. A recursive convolution smoothed estimate is used to obtain an absolutely continuous estimate for an absolutely continuous signal of a semimartingale model. It is also shown that the estimator obtained has a smaller asymptotic variance than the one obtained in Thavaneswaran (1988).
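The recursive kernel smoothing idea behind Yamato's estimator can be sketched for the density-estimation case: each new observation updates the previous estimate with a kernel term, so nothing is recomputed. This is a generic sketch of the recursion, not the semimartingale construction of the paper; the Gaussian kernel and bandwidth rule `h_n = c * n^(-1/5)` are illustrative assumptions.

```python
import numpy as np

def recursive_kde(samples, grid, c=1.0):
    """Recursive kernel density estimate in the spirit of Yamato (1971):
    f_n(x) = ((n-1)/n) f_{n-1}(x) + K_{h_n}(x - X_n) / n,  h_n = c * n^{-1/5}."""
    f = np.zeros_like(grid)
    for n, x_n in enumerate(samples, start=1):
        h = c * n ** (-0.2)                      # shrinking bandwidth
        kernel = np.exp(-0.5 * ((grid - x_n) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        f = (n - 1) / n * f + kernel / n         # recursive convolution update
    return f

rng = np.random.default_rng(1)
grid = np.linspace(-4, 4, 201)
f_hat = recursive_kde(rng.normal(size=2000), grid)
```

Because each update only touches the running estimate, the recursion is well suited to streaming data, which is what distinguishes it from recomputing a batch kernel estimate at every step.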

7.
This paper proposes a new approach to the genetic algorithm (GA) based on two explicit rules of Mendel's experiments and Mendel's population genetics: the segregation and the independent assortment of alleles. This new approach has been simulated for the optimization of certain test functions. The doctrinal sense of GA is conceptually improved by this approach using a Mendelian framework. The new approach differs from the conventional one in terms of crossover, recombination, and mutation operators. The results obtained here are in agreement with those of the conventional GA, and even better in some cases. These results suggest that the new approach is overall more sensitive and accurate than the conventional one. Possible ways of improving the approach by including more genetic formulae in the code are also discussed.
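The conventional GA baseline that the Mendelian variant is compared against can be sketched in a few lines: selection, one-point crossover and Gaussian mutation applied to a standard test function. This is a bare-bones conventional GA for illustration only, not the paper's Mendelian operators; the population size, mutation scale and sphere test function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return (x ** 2).sum(axis=1)     # classic test function, minimum 0 at the origin

# Real-coded GA: truncation selection, one-point crossover, Gaussian mutation.
pop = rng.uniform(-5, 5, size=(60, 4))
for _ in range(200):
    fitness = sphere(pop)
    parents = pop[np.argsort(fitness)[:30]]          # keep the best half (elitist)
    mates = parents[rng.permutation(30)]             # random mating pairs
    cut = rng.integers(1, 4)                         # one-point crossover position
    children = np.concatenate([parents[:, :cut], mates[:, cut:]], axis=1)
    children += rng.normal(scale=0.1, size=children.shape)   # mutation
    pop = np.concatenate([parents, children])
best = sphere(pop).min()
```

Because the parents are carried over unmutated, the best fitness is non-increasing across generations and the population steadily approaches the optimum at the origin.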

8.

Control charts are effective tools for signal detection in both manufacturing processes and service processes. Much service data come from processes with variables having non-normal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, cannot be used reliably in such circumstances. In this paper, we propose a new variance chart based on a simple statistic to monitor process variance shifts. We explore the sampling properties of the new monitoring statistic and calculate the average run lengths (ARLs) of the proposed variance chart. Furthermore, an arcsine transformed exponentially weighted moving average (EWMA) chart is proposed because the ARLs of this modified chart are more intuitive and reasonable than those of the variance chart. We compare the out-of-control variance detection performance of the proposed variance chart with that of the non-parametric Mood variance (NP-M) chart with runs rules, developed by Zombade and Ghute [Nonparametric control chart for variability using runs rules. Experiment. 2014;24(4):1683–1691], and with the nonparametric likelihood ratio-based distribution-free EWMA (NLE) chart and the combined EWMA mean and EWMA variance (CEW) control chart proposed by Zou and Tsung [Likelihood ratio-based distribution-free EWMA control charts. J Qual Technol. 2010;42(2):174–196], considering cases in which the critical quality characteristic has a normal, a double exponential or a uniform distribution. The comparison shows that the proposed chart performs better than the NP-M chart with runs rules and than the NLE and CEW control charts.
A numerical example of service times with a right-skewed distribution from a service system of a bank branch in Taiwan is used to illustrate the application of the proposed variance chart and of the arcsine transformed EWMA chart and to compare them with three existing variance (or standard deviation) charts. The proposed charts show better detection performance than those three existing variance charts in monitoring and detecting shifts in the process variance.
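The average run length (ARL) criterion used throughout this comparison is easy to make concrete by Monte Carlo: for a given chart and shift, the ARL is the mean number of samples until the first out-of-control signal. The sketch below does this for a plain 3-sigma individuals chart on normal data; it is a generic illustration of the ARL concept, not any of the charts compared in the paper.

```python
import numpy as np

def simulated_arl(shift=0.0, k=3.0, reps=2000, max_run=100000, seed=4):
    """Monte Carlo average run length of a k-sigma individuals chart for
    N(shift, 1) data; shift=0 gives the in-control ARL (about 370 for k=3)."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(reps):
        t = 0
        while t < max_run:
            t += 1
            if abs(rng.normal(shift, 1.0)) > k:   # point outside control limits
                break
        runs.append(t)
    return float(np.mean(runs))

arl0 = simulated_arl()           # in-control ARL: long runs are desirable
arl1 = simulated_arl(shift=2.0)  # out-of-control ARL: short runs are desirable
```

A good chart maximizes the in-control ARL while minimizing the out-of-control ARL, which is exactly the trade-off the comparisons in the abstract evaluate.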

9.
We propose a simple hybrid method which makes use of both saddlepoint and importance sampling techniques to approximate the bootstrap tail probability of an M-estimator. The method does not rely on explicit formula of the Lugannani-Rice type, and is computationally more efficient than both uniform bootstrap sampling and importance resampling suggested in earlier literature. The method is also applied to construct confidence intervals for smooth functions of M-estimands.

10.
Following the developments in DasGupta et al. (2000), the authors propose and explore a new method for constructing proper default priors and a method for selecting a Bayes estimate from a family. Their results are based on asymptotic expansions of certain marginal correlations. For ease of exposition, most results are presented for location families and squared error loss only. The default prior methodology amounts, ultimately, to the minimization of Fisher information, and hence, Bickel's prior works out as the default prior if the location parameter is bounded. As for the selected Bayes estimate, it corresponds to 'Gaussian tilting' of an initial reference prior.

11.
In this paper, we propose a new estimation method for binary quantile regression and variable selection which can be implemented by an iteratively reweighted least squares approach. In contrast to existing approaches, this method is computationally simple, guaranteed to converge to a unique solution and implementable with standard software packages. We demonstrate our methods using Monte Carlo experiments and then apply the proposed method to the widely used work trip mode choice dataset. The results indicate that the proposed estimators work well in finite samples.
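The iteratively reweighted least squares (IRLS) idea can be sketched on a simpler, related problem: median (tau = 0.5 quantile) regression for a continuous response, where each observation is reweighted by the reciprocal of its absolute residual. This illustrates the IRLS mechanism only; the authors' binary-response estimator uses a different weighting scheme.

```python
import numpy as np

def lad_irls(X, y, tol=1e-8, max_iter=100, eps=1e-6):
    """Median regression by IRLS: at each step solve a weighted least-squares
    problem with weights w_i = 1 / |y_i - x_i' beta| (floored at eps)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(max_iter):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted normal equations
        prev, beta = beta, beta_new
        if np.max(np.abs(beta - prev)) < tol:
            break
    return beta

# Heavy-tailed noise: the median regression stays close to the true line.
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(400), rng.normal(size=400)])
y = X @ np.array([1.0, 3.0]) + rng.standard_t(df=2, size=400)
beta = lad_irls(X, y)
```

Each iteration is an ordinary weighted least-squares solve, which is why such estimators can be implemented with standard software.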

12.
13.
In this paper asymptotic sequential fixed-width confidence bounds for an unknown density on the real line, based on integrated squared error, are studied. Using a sequence of Wolverton–Wagner kernel estimators, two classes of stopping rules are established. By the same approach, analogous results can be provided for other types of recursive density estimators.

14.
AStA Advances in Statistical Analysis - Standard Poisson and negative binomial truncated regression models for count data include the regressors in the mean of the non-truncated distribution. In...

15.
The current approach to the estimation of shelf-life and the determination of the label shelf-life as detailed in the International Conference on Harmonisation guidelines to industry is presented. The shortcomings of the status quo are explained and a possible solution is offered, which gives rise to a new definition of shelf-life. Several methods for calculating a label shelf-life are presented and investigated using a simulation study. Recommendations to adopt the new definition and to increase sample sizes are made. Copyright © 2004 John Wiley & Sons, Ltd.

16.
Summary. For rare diseases the observed disease count may exhibit extra Poisson variability, particularly in areas with low or sparse populations. Hence the variance of the estimates of disease risk, the standardized mortality ratios, may be highly unstable. This overdispersion must be taken into account, otherwise subsequent maps based on standardized mortality ratios will be misleading and, rather than displaying the true spatial pattern of disease risk, will highlight the most extreme values. Neighbouring areas tend to exhibit spatial correlation as they may share more similarities than non-neighbouring areas. The need to address overdispersion and spatial correlation has led to the proposal of Bayesian approaches for smoothing estimates of disease risk. We propose a new model for investigating the spatial variation of disease risks in conjunction with an alternative specification for estimates of disease risk in geographical areas: the multivariate Poisson–gamma model. The main advantages of this new model lie in its simplicity and its ability to account naturally for overdispersion and spatial auto-correlation. Exact expressions for important quantities such as expectations, variances and covariances can be easily derived.

17.
The scope of exact analytical results in Bayesian econometrics is known to be quite limited. It is, however, shown here to be broader than the simple natural-conjugate framework. Restricting the coefficients of a SURE model in a recursive linear way cannot be accommodated in a natural-conjugate analysis, but still allows for analytical inference, exploiting the recursive characteristics over equations. These findings are used to obtain analytical posterior results in a two-equation model for money and interest rate in the UK. Subsequent research shows that such methods can substantially increase both reliability and efficiency in the analysis of models more complicated than the one under scrutiny here.

18.

19.
This paper proposes a new test statistic based on the computational approach test (CAT) for one-way analysis of variance (ANOVA) under heteroscedasticity. The proposed test is compared with other popular tests in terms of type I error and power under different combinations of variances, means, numbers of groups and sample sizes. The proposed test yields better results than the other tests in many cases.

20.
Closed form expressions are developed for the estimators of functions of the variance components in balanced, mixed, linear models. These estimators are averages of sample covariances (variances) which offer diagnostic information on the data and the model. The cause of negative estimates may be revealed. Examples illustrate the basic concepts.

