20 similar documents found (search time: 78 ms)
1.
A non-linear mixed-effects model to predict cumulative bole volume of standing trees. Cited by: 1 (self-citations: 0, others: 1)
For purposes of forest inventory and eventual management of the forest resource, it is essential to be able to predict the cumulative bole volume to any stipulated point on the standing tree bole, while requiring measurements of tree size that can be made easily, quickly and accurately. Equations for this purpose are typically non-linear and are fitted to data garnered from a sample of felled trees. Because the cumulative bole volume of each tree is measured to numerous upper-bole locations, correlations between measurements within a tree are likely. A mixed-effects model is fitted to account for this within-subject (tree) correlation structure, while also portraying the sigmoidal shape of the cumulative bole volume profile.
2.
Tapio Nummi Jyrki Möttönen 《Journal of the Royal Statistical Society. Series C, Applied statistics》2004,53(3):495-505
Summary. In a modern computer-based forest harvester, tree stems are run in sequence through the measuring equipment root end first, and simultaneously the length and diameter are stored in a computer. These measurements may be utilized, for example, in the determination of the optimal cutting points of the stems. However, a problem that is often passed over is that these variables are usually measured with error. We consider estimation and prediction of stem curves when the length and diameter measurements are subject to errors. It is shown that only in the simplest case of a first-order model can the estimation be carried out unbiasedly by using standard least squares procedures. However, both the first- and the second-degree models are unbiased in prediction. A study on real stems is also used to illustrate the models discussed.
3.
Guy Cafri Luo Li Elizabeth W. Paxton Juanjuan Fan 《Journal of applied statistics》2018,45(12):2279-2294
Estimation of person-specific risk for adverse health events in medicine has been approached almost exclusively using parametric statistical methods. Random forest is a machine learning method based on tree ensembles that is completely nonparametric and for this reason may be better suited for risk prediction. An introduction to random forests is provided, with a focus on their application to risk prediction. Using data from a total joint replacement registry, we illustrate risk prediction for the binary outcome of 90-day mortality following implantation, as well as time to device failure for aseptic reasons with the competing risk of mortality. Using the methods described in this paper, the random forest could be applied to risk prediction in a wide variety of medical fields. Issues related to implementation are discussed.
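The workflow described here — fit a tree ensemble and read off person-specific risk as the ensemble's predicted class probability — can be sketched on synthetic data. This is a minimal illustration assuming scikit-learn is available; the registry data, covariates and outcome coding of the paper are not reproduced, and the covariate-outcome relationship below is invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for registry data: 5 hypothetical covariates and a
# binary adverse outcome (e.g. 90-day mortality) driven by the first two.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 1.0).astype(int)

# Nonparametric risk model: a random forest of 200 classification trees.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# Person-specific risk = fraction of trees voting for the adverse outcome.
risk = rf.predict_proba(X)[:, 1]
```

Because the forest averages votes over many bootstrap trees, `risk` is a probability in [0, 1] for each subject, with no parametric form imposed on the covariate effects.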
4.
A comparison of random forests and Bagging classification trees for classification. Cited by: 1 (self-citations: 0, others: 1)
Drawing on experimental data, this paper explains from two theoretical perspectives why the random forest algorithm outperforms the Bagging classification tree algorithm. Expressing the two algorithms within two different frameworks removes some ambiguities in their analysis. Under the second framework in particular, it can be seen clearly that the random forest algorithm outperforms the Bagging classification tree algorithm because it corresponds to a smaller bias.
5.
Nicole H. Augustin Stefan Lang Monica Musio Klaus von Wilpert 《Journal of the Royal Statistical Society. Series C, Applied statistics》2007,56(1):29-50
Summary. The data that are analysed are from a monitoring survey which was carried out in 1994 in the forests of Baden-Württemberg, a federal state in the south-western region of Germany. The survey is part of a large monitoring scheme that has been carried out since the 1980s at different spatial and temporal resolutions to observe the increase in forest damage. One indicator for tree vitality is tree defoliation, which is mainly caused by intrinsic factors, age and stand conditions, but also by biotic (e.g. insects) and abiotic stresses (e.g. industrial emissions). In the survey, needle loss of pine trees and many potential covariates are recorded at about 580 grid points of a 4 km × 4 km grid. The aim is to identify a set of predictors for needle loss and to investigate the relationships between the needle loss and the predictors. The response variable needle loss is recorded as a percentage in 5% steps estimated by eye using binoculars and categorized into healthy trees (10% or less), intermediate trees (10–25%) and damaged trees (25% or more). We use a Bayesian cumulative threshold model with non-linear functions of continuous variables and a random effect for spatial heterogeneity. For both the non-linear functions and the spatial random effect we use Bayesian versions of P-splines as priors. Our method is novel in that it deals with several non-standard data requirements: the ordinal response variable (the categorized version of needle loss), non-linear effects of covariates, spatial heterogeneity and prediction with missing covariates. The model is a special case of models with a geoadditive or more generally structured additive predictor. Inference can be based on Markov chain Monte Carlo techniques or mixed model technology.
6.
The Bimodal Normal distribution introduced by Alavi (2011) is a symmetric distribution whose variance is three times the variance of the corresponding normal distribution. Azzalini (1985) introduced the univariate Skew Normal distribution to model asymmetric data. In this paper the Skew Bimodal Normal–Normal distribution is introduced as a skew-symmetric distribution generated by the cumulative distribution function of the standard normal. Some properties of the distribution and some methods for generating data from it are introduced. The maximum likelihood estimation of the parameters is obtained. The distribution is fitted to the Old Faithful Geyser data.
7.
The aim of this study was to investigate prediction of stem measurements of Scots pine (Pinus sylvestris L.) for a modern computerized forest harvester. We are interested in the prediction of stem curve measurements when measurements of previously processed stems and of a short section of the stem currently being processed are known. The techniques presented here are based on cubic smoothing splines and on multivariate regression models. One advantage of these methods is that they do not assume any special functional form of the stem curve. They can also be applied to the prediction of branch limits and stem height of pine stems.
8.
Combining the adaptive reweighting of the AdaBoost algorithm with the unpruned, random-variable-split tree base models of the random forest algorithm, this paper proposes an adaptive random forest algorithm. Experimental data show that when the training set is large and the Bayes error is small, the simulated adaptive reweighting takes effect, so that the adaptive random forest algorithm outperforms the random forest algorithm.
9.
M. Vijayabhama R. Jaisankar S. Varadha Raj K. Baranidharan 《Journal of applied statistics》2018,45(1):1-7
Low forest cover and productivity are major obstacles to closing the demand-supply gap in raw material for forest-based industries, a gap that could be filled by trees outside forest areas. Casuarina is a multi-purpose, short-rotation tree that adapts to all ecosystems. Casuarina wood is demanded predominantly by the fuel, construction and paper industries, and is widely preferred by farmers, traders and industries. This study explores the spatial and temporal variability of the spread of casuarina in mitigating the demand-supply gap in Tamil Nadu using a spatial autoregressive model. The spread of casuarina was spatially and temporally significant; it was negatively influenced by gross irrigated area through the main and direct effects and positively through the indirect effect. Assured irrigation leads farmers to choose traditional agricultural crops for their livelihood in their own district. An increase in the price of casuarina would increase its spread in both the own district and neighbouring districts, augmenting the supply of raw material for forest-based industries.
10.
Vicente G. Cancho Edwin M.M. Ortega Gilberto A. Paula 《Journal of statistical planning and inference》2010
The purpose of this paper is to develop a Bayesian approach for log-Birnbaum–Saunders Student-t regression models for right-censored survival data. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the considered model. In order to attenuate the influence of outlying observations on the parameter estimates, we present Birnbaum–Saunders models in which a Student-t distribution is assumed to explain the cumulative damage. Some discussion of model selection to compare the fitted models is given, and case deletion influence diagnostics are developed for the joint posterior distribution based on the Kullback–Leibler divergence. The developed procedures are illustrated with a real data set.
11.
Kuo-Chin Lin 《Journal of applied statistics》2016,43(11):2053-2064
Categorical longitudinal data arise frequently in a variety of fields, and are commonly fitted by generalized linear mixed models (GLMMs) and generalized estimating equations models. The cumulative logit is one of the useful link functions for problems involving repeated ordinal responses. To check the adequacy of GLMMs with a cumulative logit link function, two goodness-of-fit tests based on the unweighted sum of squared model residuals, constructed using numerical integration and a bootstrap resampling technique, are proposed. The empirical type I error rates and powers of the proposed tests are examined by simulation studies. Ordinal longitudinal studies are used to illustrate the application of the two proposed tests.
12.
Environmental variables have an important effect on the reliability of many products such as coatings and polymeric composites. Long-term prediction of the performance or service life of such products must take into account the probabilistic/stochastic nature of the outdoor weather. In this article, we propose a time series modeling procedure to model the time series data of daily accumulated degradation. Daily accumulated degradation is the total amount of degradation accrued within one day and can be obtained by using a degradation rate model for the product and the weather data. The fitted model of the time series can then be used to estimate the future distribution of cumulative degradation over a period of time, and to compute reliability measures such as the probability of failure. The modeling technique and estimation method are illustrated using the degradation of a solar reflector material. We also provide a method to construct approximate confidence intervals for the probability of failure.
13.
M. H. Lee 《Communications in Statistics - Simulation and Computation》2013,42(10):1909-1922
The underlying assumption in the design of control charts is that the measurements within a sample are independently distributed. However, there are many situations where this independence assumption may not hold in practice. In this paper, the economic design of the cumulative sum (CUSUM) control chart for correlated data within a sample is developed. A genetic algorithm is applied to find the optimal design parameters of the CUSUM control chart by minimizing the cost function. An illustrative example is given. A sensitivity analysis is then conducted to evaluate the effects of cost parameters, process parameters, and the correlation coefficient on the economic design.
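For reference, the statistic whose parameters are being designed is the standard two-sided tabular CUSUM; the economic optimization of those parameters (via a genetic algorithm in the paper) is not reproduced here. A minimal sketch, with illustrative reference value k and decision interval h:

```python
def cusum_first_signal(x, target, k, h):
    """Two-sided tabular CUSUM; returns the index of the first
    out-of-control signal, or None if the chart never signals."""
    s_hi = s_lo = 0.0
    for i, xi in enumerate(x):
        s_hi = max(0.0, s_hi + (xi - target - k))   # detects upward mean shifts
        s_lo = max(0.0, s_lo + (target - xi - k))   # detects downward mean shifts
        if s_hi > h or s_lo > h:
            return i
    return None

# A mean shift of +2 after 10 in-control observations is flagged quickly.
signal = cusum_first_signal([0.0] * 10 + [2.0] * 10, target=0.0, k=0.5, h=4.0)
```

With k = 0.5 and h = 4 the shift is signalled at the 13th observation (index 12); an economic design would choose the sample size, sampling interval, k and h to minimize an expected cost per unit time rather than fix them as above.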
14.
This paper considers the problem of modeling migraine severity assessments and their dependence on weather and time characteristics. We take the viewpoint of a patient who is interested in an individual migraine management strategy. Since factors influencing migraine can differ between patients in number and magnitude, we show how a patient's headache calendar, reporting the severity measurements on an ordinal scale, can be used to determine the dominating factors for this particular patient. One also has to account for dependencies among the measurements. For this, the autoregressive ordinal probit (AOP) model of Müller and Czado (J Comput Graph Stat 14:320–338, 2005) is utilized and fitted to a single patient's migraine data by a grouped move multigrid Monte Carlo (GM-MGMC) Gibbs sampler. Initially, covariates are selected using proportional odds models. Model fit and model comparison are discussed. A comparison with proportional odds specifications shows that the AOP models are preferred.
15.
Stefan Nygaard Hansen Per Kragh Andersen Erik Thorlund Parner 《Lifetime data analysis》2014,20(4):584-598
A method based on pseudo-observations has been proposed for direct regression modeling of functionals of interest with right-censored data, including the survival function, the restricted mean and the cumulative incidence function in competing risks. The models, once the pseudo-observations have been computed, can be fitted using standard generalized estimating equation software. Regression models can, however, yield problematic results if the number of covariates is large in relation to the number of events observed. Guidelines on events per variable are often used in practice. These rules of thumb for the number of events per variable have primarily been established through simulation studies for the logistic regression model and the Cox regression model. In this paper we conduct a simulation study to examine the small-sample behavior of the pseudo-observation method for estimating risk differences and relative risks with right-censored data. We investigate how coverage probabilities and relative bias of the pseudo-observation estimator interact with sample size, number of variables and average number of events per variable.
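The pseudo-observation construction itself is simple: for a functional estimate (here the Kaplan–Meier estimate of S(t)), subject i's pseudo-observation is n times the full-sample estimate minus (n − 1) times the leave-one-out estimate. A self-contained sketch on tiny invented data (the paper's simulation design is not reproduced):

```python
def km_surv(times, events, t):
    """Kaplan-Meier estimate of S(t) for right-censored data
    (events[i] == 1 for an observed event, 0 for censoring)."""
    s = 1.0
    for u in sorted(set(tt for tt, e in zip(times, events) if e == 1 and tt <= t)):
        d = sum(1 for tt, e in zip(times, events) if tt == u and e == 1)
        at_risk = sum(1 for tt in times if tt >= u)
        s *= 1.0 - d / at_risk
    return s

def pseudo_obs(times, events, t):
    """Jackknife pseudo-observations for S(t): n*theta_hat - (n-1)*theta_hat_(-i)."""
    n = len(times)
    full = km_surv(times, events, t)
    return [n * full - (n - 1) * km_surv(times[:i] + times[i + 1:],
                                         events[:i] + events[i + 1:], t)
            for i in range(n)]

times = [1.0, 2.0, 3.0, 4.0]   # no censoring here, so pseudo-values
events = [1, 1, 1, 1]          # reduce to the indicators 1{T_i > t}
pseudo = pseudo_obs(times, events, t=2.5)   # [0.0, 0.0, 1.0, 1.0]
```

With censoring present, the pseudo-values are no longer 0/1 indicators, but they can still be regressed on covariates with standard GEE software, which is the appeal of the approach.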
16.
《Journal of Statistical Computation and Simulation》2012,82(12):1441-1456
Consider a longitudinal experiment where subjects are allocated to one of two treatment arms and are subjected to repeated measurements over time. Two non-parametric group sequential procedures, based on the Wilcoxon rank sum test and fitted with asymptotically efficient allocation rules, are derived to test the equality of the rates of change over time of the two treatments, when the distribution of responses is unknown. The procedures are designed to allow for early stopping to reject the null hypothesis while allocating fewer subjects to the inferior treatment. Simulations – based on the normal, the logistic and the exponential distributions – showed that the proposed allocation rules substantially reduce allocations to the inferior treatment, but at the expense of a relatively small increase in the total sample size and a moderate decrease in power as compared with the pairwise allocation rule.
17.
18.
Designing and integrating composite networks for monitoring multivariate Gaussian pollution fields. Cited by: 2 (self-citations: 0, others: 2)
J. V. Zidek W. Sun & N. D. Le 《Journal of the Royal Statistical Society. Series C, Applied statistics》2000,49(1):63-79
Networks of ambient monitoring stations are used to monitor environmental pollution fields such as those for acid rain and air pollution. Such stations provide regular measurements of pollutant concentrations. The networks are established for a variety of purposes at various times so often several stations measuring different subsets of pollutant concentrations can be found in compact geographical regions. The problem of statistically combining these disparate information sources into a single 'network' then arises. Capitalizing on the efficiencies so achieved can then lead to the secondary problem of extending this network. The subject of this paper is a set of 31 air pollution monitoring stations in southern Ontario. Each of these regularly measures a particular subset of ionic sulphate, sulphite, nitrite and ozone. However, this subset varies from station to station. For example only two stations measure all four. Some measure just one. We describe a Bayesian framework for integrating the measurements of these stations to yield a spatial predictive distribution for unmonitored sites and unmeasured concentrations at existing stations. Furthermore we show how this network can be extended by using an entropy maximization criterion. The methods assume that the multivariate response field being measured has a joint Gaussian distribution conditional on its mean and covariance function. A conjugate prior is used for these parameters, some of its hyperparameters being fitted empirically.
19.
Petros E. Maravelakis 《Journal of applied statistics》2012,39(2):323-336
The performance of the cumulative sum (CUSUM) control chart for the mean when measurement error exists is investigated. It is shown that the CUSUM chart is greatly affected by measurement error. A similar result holds for the CUSUM chart for the mean with linearly increasing variance. In this paper, we consider multiple measurements to reduce the effect of measurement error on the chart's performance. Finally, a comparison of the CUSUM and EWMA charts is presented and certain recommendations are given.
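Under the usual additive measurement-error model (observed value = true value + error), taking m repeated measurements of each sampled item and plotting their average shrinks the error component of the plotted variance by a factor of m, which is why multiple measurements mitigate the loss in chart performance. The model and notation below are assumptions for illustration, not taken from the paper:

```python
def plotted_variance(process_var, error_var, m):
    """Variance of the charted statistic when each plotted point is the
    average of m repeated measurements of the same item: the process
    variance is irreducible, while the error variance is divided by m."""
    return process_var + error_var / m
```

With equal process and error variance, a single measurement doubles the variance of the plotted statistic (2.0), while m = 4 repeats reduce it to 1.25, partially restoring the chart's sensitivity to true process shifts.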
20.
In this paper we examine the consequences, for statistical analysis and interpretation, of the particulate nature of radioactive contamination of a nuclear weapons test site. We propose a probabilistic model which incorporates the particulate nature of the contamination and which is simple enough to be statistically fitted to the data. Parameter estimation involves the reconciliation and combination of measurements of (a) 59.5 keV gamma rays from americium-241, a decay product of plutonium-241, using a portable medium-resolution NaI detector, on a regular survey grid at a test site and (b) 59.5 keV radiation from soil samples obtained at grid points. The implications of the model for measurement of levels of contamination are considered.
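One common way to formalize "particulate" contamination — not necessarily this paper's exact model — is a compound Poisson construction: the number of radioactive particles in a sample is Poisson-distributed, and the total activity is the sum of independent per-particle activities. A hedged sketch using only the standard library; the Poisson rate, the exponential activity distribution and all parameter values are illustrative assumptions:

```python
import math
import random

def poisson_sample(rate, rng):
    """Knuth's algorithm for a Poisson(rate) draw (adequate for modest rates)."""
    limit = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def total_activity(rate, mean_activity, rng):
    """Total sample activity: a Poisson number of particles, each with an
    independent exponential activity (both choices are illustrative)."""
    n = poisson_sample(rate, rng)
    return sum(rng.expovariate(1.0 / mean_activity) for _ in range(n))

rng = random.Random(1)
draws = [total_activity(rate=3.0, mean_activity=2.0, rng=rng) for _ in range(20000)]
```

The mean total activity is rate × mean_activity (here about 6), but the distribution is highly skewed with an atom at zero — exactly the kind of behavior that makes particulate contamination awkward for standard Gaussian analyses.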