Similar Articles
 20 similar articles found (search time: 334 ms)
1.
Data in many experiments arise as curves, so it is natural to take a curve as the basic unit of analysis, which is the viewpoint of functional data analysis (FDA). Functional curves are encountered when units are observed over time. Although the whole curve itself is not observed, a sufficiently large number of evaluations, as is common with modern recording equipment, is assumed to be available. In this article, we consider statistical inference for the mean functions in the two-sample problem for functional data sets: assuming that the curves are observed without noise, we test whether the two groups of curves have the same mean function. L2-norm-based and bootstrap-based test statistics are proposed, and the proposed methodology is shown to be flexible. A simulation study and real-data examples are used to illustrate our techniques.
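As a rough illustration of an L2-norm-type two-sample comparison of mean curves (not the authors' implementation, which calibrates the statistic by bootstrap methods), the following Python sketch compares two simulated groups of curves on a common grid and approximates the null distribution by permuting group labels; all data and parameter values are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 101)                                       # common evaluation grid
    curves1 = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, (30, t.size))   # group 1: 30 curves
    curves2 = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, (25, t.size))   # group 2: 25 curves

    def l2_stat(a, b, grid):
        # integrated squared difference between the two sample mean curves
        diff = a.mean(axis=0) - b.mean(axis=0)
        return np.sum(diff ** 2) * (grid[1] - grid[0])

    obs = l2_stat(curves1, curves2, t)

    # Approximate the null distribution by randomly permuting the group labels.
    pooled = np.vstack([curves1, curves2])
    n1 = curves1.shape[0]
    perm = np.array([l2_stat(p[:n1], p[n1:], t)
                     for p in (pooled[rng.permutation(pooled.shape[0])] for _ in range(999))])
    p_value = (1 + np.sum(perm >= obs)) / (1 + perm.size)
    print(obs, p_value)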

2.
Testing for space-time clusters of unknown size
The Knox test is widely used in epidemiology to test for infection or contagion, which is indicated by an excess of cases that are close both in space and time. Often, however, the values of the space and time critical parameters that define 'closeness' are unknown. An exact test is proposed for this situation, its computer implementation is described and examples of its use on published data sets are given. Other possible applications of the method adopted here are discussed. This is an example of a test where nuisance parameters are absent under H0, and where the distribution of the test statistic can be found numerically.
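For orientation, here is a hedged sketch of the basic Knox statistic at fixed space and time cutoffs, calibrated by Monte Carlo permutation of the case times; the exact test for unknown cutoffs proposed in the abstract is not reproduced, and all data and cutoff values below are simulated or arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 60
    xy = rng.uniform(0, 10, (n, 2))        # case locations (hypothetical data)
    times = rng.uniform(0, 365, n)         # case onset times

    def knox(xy, times, d_crit, t_crit):
        # number of case pairs that are close in both space and time
        d_space = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        d_time = np.abs(times[:, None] - times[None, :])
        close = (d_space < d_crit) & (d_time < t_crit)
        iu = np.triu_indices(len(times), k=1)
        return close[iu].sum()

    obs = knox(xy, times, d_crit=1.0, t_crit=14.0)
    # Permute times relative to locations to approximate the null distribution.
    null = np.array([knox(xy, rng.permutation(times), 1.0, 14.0) for _ in range(999)])
    p_value = (1 + np.sum(null >= obs)) / (1 + null.size)
    print(obs, p_value)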

3.
A structural regression model is considered in which some of the variables are measured with error. Instead of additive measurement errors, systematic biases are allowed by relating true and observed values via simple linear regressions. Additional data, based on standards, are available, which allow for "calibration" of the measuring methods involved. Using only moment assumptions, some simple estimators are proposed and their asymptotic properties are developed. The results parallel and extend those given by Fuller (1987), in which the errors are additive and the error covariance is estimated. Maximum likelihood estimation is also discussed, and the problem is illustrated using data from an acid rain study in which the relationship between pH and alkalinity is of interest but neither variable is observed exactly.
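A rough numerical sketch in the spirit of (but not identical to) the moment estimators described above: calibration data on standards are used to estimate the linear bias of the measuring method, and a simple attenuation-type moment correction is then applied to the regression slope. All parameter values and data below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)

    # Calibration data: standards with known true values x0, observed as w0 = a + b*x0 + u.
    a_true, b_true, sd_u = 0.5, 0.9, 0.2
    x0 = np.linspace(4, 9, 20)
    w0 = a_true + b_true * x0 + rng.normal(0, sd_u, x0.size)
    b_hat, a_hat = np.polyfit(x0, w0, 1)                   # estimated calibration line
    s2_u = (w0 - (a_hat + b_hat * x0)).var(ddof=2)         # measurement-error variance

    # Study data: y depends on the true (unobserved) x; only the biased measurement w is seen.
    x = rng.normal(6.5, 1.0, 200)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 200)
    w = a_true + b_true * x + rng.normal(0, sd_u, 200)

    x_cal = (w - a_hat) / b_hat                            # calibrated measurements
    err_var = s2_u / b_hat ** 2                            # residual error variance after calibration
    beta = np.cov(x_cal, y)[0, 1] / (np.var(x_cal, ddof=1) - err_var)   # moment-corrected slope
    alpha = y.mean() - beta * x_cal.mean()
    print(beta, alpha)                                     # should be near the true values 2.0 and 1.0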

4.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman-Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters. Approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is reduced of all nuisance parameters, and is appropriate for meta-analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher-Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher-Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. Our material is illustrated in a number of examples and in an application to multiple capture data for bowhead whales.
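As a minimal concrete example of a confidence distribution in the simplest setting (not taken from the paper): for the mean of a normal sample with unknown variance, C(mu) = F_{t, n-1}(sqrt(n) (mu - xbar) / s), and its quantiles span the usual t-intervals. The sketch below checks this numerically on simulated data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(10, 2, size=25)
    n, xbar, s = x.size, x.mean(), x.std(ddof=1)

    def cd(mu):
        # confidence distribution C(mu) = F_{t, n-1}(sqrt(n) (mu - xbar) / s)
        return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

    def cd_quantile(p):
        # quantile of the confidence distribution (inverts cd)
        return xbar + stats.t.ppf(p, df=n - 1) * s / np.sqrt(n)

    print(cd(9.5))                               # cumulative confidence at mu = 9.5
    # The 2.5% and 97.5% CD quantiles span the central 95% confidence interval
    # and agree with the usual t-interval.
    print(cd_quantile(0.025), cd_quantile(0.975))
    print(stats.t.interval(0.95, df=n - 1, loc=xbar, scale=s / np.sqrt(n)))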

5.
Lu Lin & Yongxin Liu, Statistics, 2017, 51(4): 745-765
We consider a partially piecewise regression in which the main regression coefficients are constant over all subdomains, but the extraessential regression function varies from piece to piece and is difficult to estimate. For this situation, two new regression methodologies are proposed under the criteria of mini-max-risk and mini-mean-risk. The resulting models describe the regression relations in maximum-risk and mean-risk environments, respectively. A two-stage estimation procedure, together with a composite method, is introduced. The asymptotic normality of the estimators is established, and the standard convergence rate and efficiency are achieved. Some unusual features of the new estimators and predictions, and the related variable selection, are discussed for a comprehensive comparison. Simulation studies and a real financial example are given to illustrate the new methodologies.

6.
In this paper, an extension of the Horvitz-Thompson estimator used in adaptive cluster sampling to a continuous universe is developed. The main new results are presented in theorems. The primary notions for discrete populations are transferred to continuous populations. First- and second-order inclusion probabilities for networks are derived. The Horvitz-Thompson estimator for adaptive cluster sampling in a continuous universe is constructed and its unbiasedness is proven. The variance and an unbiased variance estimator are derived. Finally, the theory is illustrated with an example.
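To fix ideas, here is a hedged sketch of the classical discrete-universe Horvitz-Thompson estimator and its unbiased variance estimator, which the abstract extends to adaptive cluster sampling over a continuous universe. The inclusion probabilities below are illustrative inputs only, not derived from any particular design.

    import numpy as np

    y = np.array([3.0, 0.0, 7.0, 2.0])     # observed values for distinct sampled units/networks
    pi = np.array([0.4, 0.3, 0.5, 0.2])    # first-order inclusion probabilities
    # Second-order (joint) inclusion probabilities pi_ij, with pi_ii = pi_i on the diagonal.
    pij = np.array([
        [0.40, 0.10, 0.18, 0.07],
        [0.10, 0.30, 0.13, 0.05],
        [0.18, 0.13, 0.50, 0.09],
        [0.07, 0.05, 0.09, 0.20],
    ])

    tau_hat = np.sum(y / pi)               # Horvitz-Thompson estimate of the population total

    # Unbiased Horvitz-Thompson variance estimator (requires all pi_ij > 0).
    v = np.sum((1 - pi) / pi ** 2 * y ** 2)
    for i in range(len(y)):
        for j in range(len(y)):
            if i != j:
                v += (pij[i, j] - pi[i] * pi[j]) / (pi[i] * pi[j] * pij[i, j]) * y[i] * y[j]
    print(tau_hat, v)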

7.
In this article, maximum likelihood techniques for estimating consumer demand functions when budget constraints are piecewise linear are exposited and surveyed. Consumer demand functions are formally derived under such constraints, and it is shown that the functions are themselves nonlinear as a result. The econometric problems in estimating such functions are exposited, and the importance of the stochastic specification is stressed, in particular the specification of both unobserved heterogeneity of preferences and measurement error. Econometric issues in estimation and testing are discussed, and the results of the studies that have been conducted to date are surveyed.

8.
In forensic science, in order to determine whether sets of traces are from the same source or not, it is widely advocated to evaluate evidential value of similarity of the traces by likelihood ratios (LRs). If traces are expressed by measurements following a two-level model with random effects and known variances, closed LR formulas are available given normality, or kernel density distributions, on the effects. For the known variances estimators are used though, which leads to uncertainty on the resulting LRs which is hard to quantify. The above is analyzed in an approach in which both effects and variances are random, following standard prior distributions on univariate data, leading to posterior LRs. For non-informative and conjugate priors, closed LR formulas are obtained that are interesting in structure and generalize a known result given fixed variance. A semi-conjugate prior on the model seems usable in many applications. It is described how to obtain credible intervals using Monte Carlo Markov Chain and regular simulation, and an example is described for comparison of XTC tablets based on MDMA content. In this way, uncertainty on LR estimation is expressed more clearly which makes the evidential value more transparent in a judicial context.  相似文献   
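A rough Monte Carlo sketch of the fixed-variance, two-level normal likelihood ratio that the abstract takes as its starting point; the Bayesian treatment with priors on the variances is not reproduced, and all parameter and data values below are hypothetical.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    mu, tau, sigma = 80.0, 10.0, 3.0   # between-source mean/SD and within-source SD (assumed known)
    n, m = 5, 5                        # numbers of replicate measurements on the two traces
    xbar, ybar = 86.0, 84.5            # observed mean measurements for the two traces

    def f(mean, theta, k):
        # density of a sample mean of k replicates given its source mean theta
        return stats.norm.pdf(mean, loc=theta, scale=sigma / np.sqrt(k))

    theta = rng.normal(mu, tau, size=200_000)   # draws from the between-source distribution

    # Same source: the two means share one source mean, integrated out by Monte Carlo.
    numerator = np.mean(f(xbar, theta, n) * f(ybar, theta, m))
    # Different sources: independent source means for the two traces.
    denominator = np.mean(f(xbar, theta, n)) * np.mean(f(ybar, theta, m))
    print(numerator / denominator)     # likelihood ratio in favour of "same source"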

9.
Most real-world shapes and images are characterized by high variability: they are not rigid, like crystals, for example, but they are strongly structured. Therefore, a fundamental task in the understanding and analysis of such image ensembles is the construction of models that incorporate both variability and structure in a mathematically precise way. The global shape models introduced in Grenander's general pattern theory are intended to do this. In this paper, we describe the representation of two-dimensional mitochondria and membranes in electron microscope photographs, and three-dimensional amoebae in optical sectioning microscopy. There are three kinds of variability in all of these patterns, which these representations accommodate. The first is the variability in shape and viewing orientation. For this, the typical structure is represented via linear, circular and spherical templates, with the variability accommodated via transformations applied to the templates. The transformations form groups: scale, rotation and translation. They are applied locally throughout the continuum and are of high dimension. The second is the textural variability; the inside and outside of these basic shapes are subject to random variation, as well as sensor noise. For this, statistical sensor models and Markov random field texture models are used to connect the constituent structures of the shapes to the measured data. The third kind of variability is associated with the fact that each scene is made up of a variable number of shapes; this number is not assumed to be known a priori. Each scene has a variable number of parameters encoding the transformations of the templates appropriate for that scene. For this, a single posterior distribution is defined over the countable union of spaces representing models with varying numbers of shapes. Bayesian inference is performed via computation of the conditional expectation of the parametrically defined shapes under the posterior. These conditional mean estimates are generated using jump-diffusion processes. Results for membranes, mitochondria and amoebae are shown.

10.
The following two predictors are compared for time series with systematically missing observations: (a) a time series model is fitted to the full series X_t, and forecasts are based on this model; (b) a time series model is fitted to the series with systematically missing observations Y_τ, and forecasts are based on the resulting model. If the data generation processes are known vector autoregressive moving average (ARMA) processes, the first predictor is at least as efficient as the second in a mean squared error sense. Conditions are given for the two predictors to be identical. If only the ARMA orders of the generation processes are known and the coefficients are estimated, or if both the process orders and the coefficients are estimated, the first predictor is again, in general, superior. There are, however, exceptions in which the second predictor, using seemingly less information, may be better. These results are discussed using both asymptotic theory and small-sample simulations. Some economic time series are used as illustrative examples.
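A small simulation sketch, under assumed AR(1) dynamics rather than the paper's general ARMA setting, comparing predictor (a), fitted to the full series, with predictor (b), fitted only to every k-th observation; both forecast the value k steps after the last commonly observed point.

    import numpy as np

    rng = np.random.default_rng(5)
    phi, k, T, reps = 0.8, 3, 300, 2000

    def ar1(n):
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        return x

    def ols_slope(x):
        # lag-1 least squares estimate of the autoregressive coefficient
        return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

    err_full, err_sub = [], []
    for _ in range(reps):
        x = ar1(T + k)
        past, future = x[:T], x[T + k - 1]   # forecast origin is past[-1], target is k steps ahead
        phi_full = ols_slope(past)           # predictor (a): model fitted to the full series
        sub = past[::-1][::k][::-1]          # every k-th observation, ending at the forecast origin
        phi_sub = ols_slope(sub)             # predictor (b): model fitted to the subsampled series
        err_full.append(future - phi_full ** k * past[-1])
        err_sub.append(future - phi_sub * past[-1])

    # Monte Carlo forecast mean squared errors for predictors (a) and (b)
    print(np.mean(np.square(err_full)), np.mean(np.square(err_sub)))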

11.
The gamma process is a natural model for degradation processes in which deterioration is supposed to take place gradually over time in a sequence of tiny increments. When units or individuals are observed over time it is often apparent that they degrade at different rates, even though no differences in treatment or environment are present. Thus, in applying gamma-process models to such data, it is necessary to allow for such unexplained differences. In the present paper this is accomplished by constructing a tractable gamma-process model incorporating a random effect. The model is fitted to some data on crack growth and corresponding goodness-of-fit tests are carried out. Prediction calculations for failure times defined in terms of degradation level passages are developed and illustrated.
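A small simulation sketch (with assumed parameter values, not the fitted crack-growth model) of a gamma degradation process with a unit-specific random effect on the scale, and the failure times induced when degradation first crosses a threshold.

    import numpy as np

    rng = np.random.default_rng(6)
    n_units, n_steps, dt = 8, 200, 0.1
    shape_rate = 2.0                  # gamma shape accumulates as shape_rate * time
    threshold = 30.0

    # Unit-specific scale drawn from a gamma "frailty", so units degrade at different rates.
    unit_scale = rng.gamma(shape=5.0, scale=0.2, size=n_units)

    increments = rng.gamma(shape=shape_rate * dt, scale=unit_scale[:, None],
                           size=(n_units, n_steps))
    paths = np.cumsum(increments, axis=1)           # nondecreasing degradation paths

    # Failure time: first time each path exceeds the threshold (inf if it never does).
    crossed = paths >= threshold
    first = np.argmax(crossed, axis=1)
    failure_time = np.where(crossed.any(axis=1), (first + 1) * dt, np.inf)
    print(failure_time)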

12.
Summary.  In a modern computer-based forest harvester, tree stems are run in sequence through the measuring equipment root end first, and simultaneously the length and diameter are stored in a computer. These measurements may be utilized, for example, in the determination of the optimal cutting points of the stems. However, a problem that is often passed over is that these variables are usually measured with error. We consider estimation and prediction of stem curves when the length and diameter measurements are subject to errors. It is shown that only in the simplest case of a first-order model can the estimation be carried out unbiasedly by using standard least squares procedures. However, both the first- and the second-degree models are unbiased in prediction. A study on real stems is also used to illustrate the models that are discussed.

13.
The classical change-point problem is considered when the data are assumed to be correlated. The nuisance parameters in the model are the initial level μ and the common variance σ². Four cases, corresponding to none, one, or both of these parameters being known, are considered. Likelihood ratio tests are obtained for testing hypotheses regarding the change in level, δ, in each case. Following Henderson (1986), a Bayesian test is obtained for the two-sided alternative. Under the Bayesian setup, a locally most powerful unbiased test is derived for the case μ = 0 and σ² = 1. The exact null distribution function of the Bayesian test statistic is given an integral representation. Methods to obtain exact and approximate critical values are indicated.
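For the simplest case only (independent N(0, 1) observations with μ = 0 and σ² = 1 known, not the correlated model of the abstract), the sketch below scans a likelihood-ratio-type statistic for a level change after an unknown change point and approximates its null distribution by simulation.

    import numpy as np

    rng = np.random.default_rng(7)

    def lr_stat(x):
        n = x.size
        # For each candidate change point k, the post-change sample sum standardized
        # under sigma^2 = 1; the scan statistic is the maximum of its square.
        z = np.array([x[k:].sum() / np.sqrt(n - k) for k in range(1, n)])
        return np.max(z ** 2)

    x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.0, 1, 40)])  # level shift at t = 61
    obs = lr_stat(x)
    null = np.array([lr_stat(rng.normal(0, 1, x.size)) for _ in range(1000)])
    print(obs, np.mean(null >= obs))    # statistic and simulated p-value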

14.
It is often the case in mixture experiments that some of the ingredients, such as additives or flavourings, are included with proportions constrained to lie in a restricted interval, while the majority of the mixture is made up of a particular ingredient used as a filler. The experimental region in such cases is restricted to a parallelepiped in or near one corner of the full simplex region. In this paper, orthogonally blocked designs with two experimental blends on each edge of the constrained region are considered for mixture experiments with three and four ingredients. The optimal symmetric orthogonally blocked designs within this class are determined and it is shown that even better designs are obtained for the asymmetric situation, in which some experimental blends are taken at the vertices of the experimental region. Some examples are given to show how these ideas may be extended to identify good designs in three and four blocks. Finally, an example is included to illustrate how to overcome the problems of collinearity that sometimes occur when fitting quadratic models to experimental data from mixture experiments in which some of the ingredient proportions are restricted to small values.

15.
Summary.  The pattern of absenteeism in the downsizing process of companies is a topic in focus in economics and social science. A general question is whether employees who are frequently absent are more likely to be selected to be laid off or, in contrast, whether employees who are to be dismissed are more likely to be absent for the remaining time of their working contract. We pursue an empirical and microeconomic investigation of these theses. We analyse longitudinal data that were collected in a German company over several years. We fit a semiparametric transition model based on a mixture Poisson distribution for the days of absenteeism per month. Prediction intervals are considered, and the primary focus is on the period of downsizing. The data reveal clear evidence for the hypothesis that employees who are to be laid off are more frequently absent before leaving the company. Interestingly, though, there is no clear evidence that the employees selected to leave the company are those with a bad absenteeism profile.

16.
In dealing with ties in failure time data, the mechanism by which the data are observed should be considered. If the data are discrete, the process is relatively simple and is determined by what is actually observed. With continuous data, ties are not supposed to occur, but they do because the data are grouped into intervals (even if only rounding intervals). In this case there is actually a non-identifiability problem which can only be resolved by modelling the process. Various reasonable modelling assumptions are investigated in this paper. They lead to better ways of dealing with ties between observed failure times and censoring times of different individuals. The current practice is to assume that the censoring times occur after all the failures with which they are tied.

17.
Realized Volatility: A Review
This article reviews the exciting and rapidly expanding literature on realized volatility. After presenting a general univariate framework for estimating realized volatilities, a simple discrete time model is presented in order to motivate the main results. A continuous time specification provides the theoretical foundation for the main results in this literature. Cases with and without microstructure noise are considered, and it is shown how microstructure noise can cause severe problems in terms of consistent estimation of the daily realized volatility. Independent and dependent noise processes are examined. The most important methods for providing consistent estimators are presented, and a critical exposition of different techniques is given. The finite sample properties are discussed in comparison with their asymptotic properties. A multivariate model is presented to discuss estimation of the realized covariances. Various issues relating to modelling and forecasting realized volatilities are considered. The main empirical findings using univariate and multivariate methods are summarized.
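As a minimal illustration of the basic objects discussed in the review (not code from it), the following sketch computes a daily realized variance from simulated intraday log-prices, together with a simple sparse-sampling variant often used to reduce the impact of microstructure noise; all parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(8)
    n_intraday = 390                                  # e.g. one observation per minute
    true_vol = 0.01                                   # daily volatility of the efficient price
    log_price = np.cumsum(rng.normal(0, true_vol / np.sqrt(n_intraday), n_intraday))
    noisy_price = log_price + rng.normal(0, 0.0005, n_intraday)   # add microstructure noise

    def realized_variance(p, step=1):
        r = np.diff(p[::step])                        # intraday returns at the chosen frequency
        return np.sum(r ** 2)

    print(realized_variance(noisy_price))             # all ticks: inflated by the noise
    print(realized_variance(noisy_price, step=5))     # sparsely sampled: less noise-inflated
    print(true_vol ** 2)                              # target integrated variance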

18.
Most real-world shapes and images are characterized by high variability: they are not rigid, like crystals, for example, but they are strongly structured. Therefore, a fundamental task in the understanding and analysis of such image ensembles is the construction of models that incorporate both variability and structure in a mathematically precise way. The global shape models introduced in Grenander's general pattern theory are intended to do this. In this paper, we describe the representation of two-dimensional mitochondria and membranes in electron microscope photographs, and three-dimensional amoebae in optical sectioning microscopy. There are three kinds of variability in all of these patterns, which these representations accommodate. The first is the variability in shape and viewing orientation. For this, the typical structure is represented via linear, circular and spherical templates, with the variability accommodated via transformations applied to the templates. The transformations form groups: scale, rotation and translation. They are applied locally throughout the continuum and are of high dimension. The second is the textural variability; the inside and outside of these basic shapes are subject to random variation, as well as sensor noise. For this, statistical sensor models and Markov random field texture models are used to connect the constituent structures of the shapes to the measured data. The third kind of variability is associated with the fact that each scene is made up of a variable number of shapes; this number is not assumed to be known a priori. Each scene has a variable number of parameters encoding the transformations of the templates appropriate for that scene. For this, a single posterior distribution is defined over the countable union of spaces representing models with varying numbers of shapes. Bayesian inference is performed via computation of the conditional expectation of the parametrically defined shapes under the posterior. These conditional mean estimates are generated using jump-diffusion processes. Results for membranes, mitochondria and amoebae are shown.

19.
The estimation of variance components in a heteroscedastic random model is discussed in this paper. Maximum likelihood (ML) estimation is described for one-way heteroscedastic random models. The proportionality condition, namely that the cell variance is proportional to the cell sample size, is used to eliminate the effect of heteroscedasticity. Algebraic expressions for the estimators are obtained for the model. These expressions depend mainly on the inverse of the variance-covariance matrix of the observation vector, so the variance-covariance matrix is obtained and formulae for its inversion are given. A Monte Carlo study is conducted. Five different variance patterns with different numbers of cells are considered, and for each variance pattern 1000 Monte Carlo samples are drawn. The Monte Carlo biases and MSEs of the estimators of the variance components are then calculated. In terms of both bias and MSE, the maximum likelihood (ML) estimators of the variance components are found to perform well.
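A purely numerical sketch, not the paper's closed-form expressions: the one-way random model with cell error variance proportional to the cell sample size is fitted by maximizing the Gaussian log-likelihood directly with a generic optimizer; the simulated parameter values and cell sizes are arbitrary.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(9)
    sizes = [3, 5, 8, 4, 6]
    mu0, s2a0, c0 = 5.0, 2.0, 0.5
    # Simulated data: y_ij = mu + a_i + e_ij, with Var(e_ij) = c * n_i in cell i.
    cells = [mu0 + rng.normal(0, np.sqrt(s2a0)) + rng.normal(0, np.sqrt(c0 * n), n)
             for n in sizes]

    def neg_loglik(par):
        mu, log_s2a, log_c = par
        s2a, c = np.exp(log_s2a), np.exp(log_c)
        ll = 0.0
        for y in cells:
            n = y.size
            cov = s2a * np.ones((n, n)) + c * n * np.eye(n)   # proportionality: variance = c * n_i
            ll += stats.multivariate_normal.logpdf(y, mean=np.full(n, mu), cov=cov)
        return -ll

    start = np.array([np.mean(np.concatenate(cells)), 0.0, 0.0])
    res = optimize.minimize(neg_loglik, x0=start, method="Nelder-Mead")
    mu_hat, s2a_hat, c_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
    print(mu_hat, s2a_hat, c_hat)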

20.
This article reviews the exciting and rapidly expanding literature on realized volatility. After presenting a general univariate framework for estimating realized volatilities, a simple discrete time model is presented in order to motivate the main results. A continuous time specification provides the theoretical foundation for the main results in this literature. Cases with and without microstructure noise are considered, and it is shown how microstructure noise can cause severe problems in terms of consistent estimation of the daily realized volatility. Independent and dependent noise processes are examined. The most important methods for providing consistent estimators are presented, and a critical exposition of different techniques is given. The finite sample properties are discussed in comparison with their asymptotic properties. A multivariate model is presented to discuss estimation of the realized covariances. Various issues relating to modelling and forecasting realized volatilities are considered. The main empirical findings using univariate and multivariate methods are summarized.
