Similar Documents
20 similar documents found (search took 546 ms)
1.
Most statistical models arising in real-life applications as well as in interdisciplinary research are complex in their designs, sampling plans, and associated probability laws, which in turn are often constrained by inequality, order, functional, shape or other restraints. Optimality of conventional likelihood-ratio-based statistical inference may not be tenable here, although the use of restricted or quasi-likelihood methods has surged in such environments. S.N. Roy's ingenious union–intersection principle provides an alternative avenue, often having some computational advantages, increased scope of adaptability, and flexibility beyond conventional likelihood paradigms. This scenario is appraised here with some illustrative examples, and with some interesting problems of inference on stochastic ordering (dominance) in parametric as well as beyond-parametric setups.

2.
Scientific experiments commonly result in clustered discrete and continuous data. Existing methods for analyzing such data include the use of quasi-likelihood procedures and generalized estimating equations to estimate marginal mean response parameters. In applications to areas such as developmental toxicity studies, where discrete and continuous measurements are recorded on each fetus, or clinical ophthalmologic trials, where different types of observations are made on each eye, the assumption that data within a cluster are exchangeable is often very reasonable. We use this assumption to formulate fully parametric regression models for clusters of bivariate data with binary and continuous components. The proposed regression models have marginal interpretations and reproducible model structures. Tractable expressions for the likelihood equations are derived, and iterative schemes are given for computing efficient estimates (MLEs) of the marginal mean, correlations, variances and higher moments. We demonstrate the use of the ‘exchangeable’ procedure with an application to a developmental toxicity study involving fetal weight and malformation data.

3.
We develop a class of new multivariate procedures for monitoring quality by detecting a change in the level of a multivariate process. Following the ideas of S.N. Roy, we first consider a linear combination statistic which results from projecting the multivariate observations onto a unit vector and then maximizing a selected univariate statistic over all directions.
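The maximization over directions can be sketched numerically: for p = 2 and the hypothesis μ = 0, the maximum of the squared univariate t-statistic over all projection directions equals Hotelling's T². The toy grid search below is our illustration of that identity, not the authors' monitoring procedure; all names and the simulated data are ours.

```python
import math, random

random.seed(0)

# Toy bivariate sample (n observations, p = 2), mean slightly off 0.
n = 50
data = [(random.gauss(0.3, 1.0), random.gauss(0.0, 1.0)) for _ in range(n)]

mean = [sum(x[j] for x in data) / n for j in range(2)]
# Sample covariance matrix (unbiased).
S = [[sum((x[i] - mean[i]) * (x[j] - mean[j]) for x in data) / (n - 1)
      for j in range(2)] for i in range(2)]

# Hotelling's T^2 = n * mean' S^{-1} mean (closed form for p = 2).
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]
t2_hotelling = n * sum(mean[i] * Sinv[i][j] * mean[j]
                       for i in range(2) for j in range(2))

# Union-intersection view: T^2 is the maximum squared t-statistic over all
# projection directions a (approximated here by a fine grid of angles).
def squared_t(a):
    proj = [a[0] * x[0] + a[1] * x[1] for x in data]
    m = sum(proj) / n
    v = sum((p - m) ** 2 for p in proj) / (n - 1)
    return n * m * m / v

best = max(squared_t((math.cos(th), math.sin(th)))
           for th in [k * math.pi / 2000 for k in range(2000)])

assert best <= t2_hotelling + 1e-9          # no direction beats T^2
assert abs(best - t2_hotelling) < 1e-3 * max(t2_hotelling, 1.0)
```

The grid attains the analytic maximum to within discretization error, which is the union–intersection identity in miniature.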

4.
A new lifetime distribution is proposed and studied. The Harris extended exponential is obtained from a mixture of the exponential and Harris distributions, which arises from a branching process. Several structural properties of the new distribution are discussed, including moments, generating function and order statistics. The new distribution can model data with increasing or decreasing failure rate. The shape of the hazard rate function is controlled by one of the added parameters in an uncomplicated manner. An application to a real dataset illustrates the usefulness of the new distribution.
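As a hedged sketch, the code below uses one common parameterization of the Harris extended exponential survival function (treat this exact form, and the parameter names lam, theta, k, as our assumptions rather than the paper's notation); inverse-transform sampling then follows in closed form.

```python
import math, random

random.seed(1)

# Assumed parameterization of the Harris extended exponential survival:
#   S(x) = [ theta * exp(-k*lam*x) / (1 - (1-theta) * exp(-k*lam*x)) ]^(1/k)
def hee_survival(x, lam=1.0, theta=0.5, k=2.0):
    y = math.exp(-k * lam * x)
    return (theta * y / (1.0 - (1.0 - theta) * y)) ** (1.0 / k)

# Inverse-transform sampling: solve S(x) = u for x in closed form.
def hee_sample(lam=1.0, theta=0.5, k=2.0):
    u = random.random()
    y = u ** k / (theta + (1.0 - theta) * u ** k)
    return -math.log(y) / (k * lam)

assert abs(hee_survival(0.0) - 1.0) < 1e-12       # S(0) = 1
assert hee_survival(1.0) > hee_survival(2.0)      # survival decreases
# Empirical check: fraction of samples exceeding t should track S(t).
t, n = 1.0, 20000
frac = sum(hee_sample() > t for _ in range(n)) / n
assert abs(frac - hee_survival(t)) < 0.02
```

The closed-form inverse is what makes simulation from this family convenient.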

5.
Structural equation modeling (SEM) typically utilizes first- and second-order moment structures. This limits its applicability, since it leaves many models unidentified and creates many equivalent models that researchers would like to distinguish. In this paper, we relax this restriction and assume non-normal distributions on exogenous variables. We provide a solution to the problems of underidentifiability and equivalence of SEM models by making use of non-normality (higher-order moment structures). The non-normal SEM is applied to finding the possible direction of a path in simple regression models. The method of (generalized) least squares is employed to estimate model parameters. A test statistic for examining the fit of a model is proposed. A simulation result and a real data example are reported to study how the non-normal SEM approach works empirically.

6.
This paper extends the idea of Vincze (1978) and unifies the approach for the uniparameter and multiparameter situations for obtaining the Cramér-Rao inequality.
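A minimal numerical illustration of the uniparameter Cramér-Rao inequality, under our own choice of model: for an Exponential sample with mean mu, the Fisher information per observation about mu is 1/mu², so any unbiased estimator has variance at least mu²/n, and the sample mean attains this bound.

```python
import random, statistics

random.seed(2)

# CRB illustration: Exponential(mean mu) sample of size n.
# Information bound for unbiased estimation of mu: mu^2 / n.
mu, n, reps = 2.0, 25, 4000
crb = mu * mu / n  # = 0.16

means = [statistics.fmean(random.expovariate(1.0 / mu) for _ in range(n))
         for _ in range(reps)]
var_hat = statistics.variance(means)

assert var_hat >= crb * 0.85        # cannot fall much below the bound
assert abs(var_hat - crb) < 0.03    # sample mean attains it (up to MC error)
```

The sample mean is efficient here; for most other models the bound is strict.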

7.
It is shown that Strawderman's [1974. Minimax estimation of powers of the variance of a normal population under squared error loss. Ann. Statist. 2, 190–198] technique for estimating the variance of a normal distribution can be extended to estimating a general scale parameter in the presence of a nuisance parameter. Employing standard monotone likelihood ratio-type conditions, a new class of improved estimators for this scale parameter is derived under quadratic loss. By imposing an additional condition, a broader class of improved estimators is obtained. The dominating procedures are analogous in form to those in Strawderman [1974]. Application of the general results to the exponential distribution yields new sufficient conditions, other than those of Brewster and Zidek [1974. Improving on equivariant estimators. Ann. Statist. 2, 21–38] and Kubokawa [1994. A unified approach to improving equivariant estimators. Ann. Statist. 22, 290–299], for improving the best affine equivariant estimator of the scale parameter. A class of estimators satisfying the new conditions is constructed. The results shed new light on Strawderman's [1974] technique.
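The flavor of this technique can be seen in Stein's (1964) prototype for the normal variance, which this line of work generalizes. The sketch below is our own simulation (sample size and replication counts are our choices): it compares, at μ = 0 where the gain is largest, the risk of the best affine equivariant estimator S²/(n+1) with its truncated improvement min{S²/(n+1), (S² + n·x̄²)/(n+2)} under the quadratic loss (δ/σ² − 1)².

```python
import random

random.seed(9)

# Paired Monte Carlo comparison of Stein's truncated variance estimator
# against the best affine equivariant estimator, at mu = 0, sigma^2 = 1.
n, reps, sigma2 = 10, 50000, 1.0
gain = 0.0
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs)          # centered sum of squares
    d0 = s2 / (n + 1)                               # best affine equivariant
    d1 = min(d0, (s2 + n * xbar * xbar) / (n + 2))  # Stein's improvement
    gain += (d0 - sigma2) ** 2 - (d1 - sigma2) ** 2
gain /= reps

assert gain > 0.0   # strict average risk improvement at mu = 0
```

Away from μ = 0 the two estimators coincide with increasing probability, which is why the improvement cannot be uniform in magnitude.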

8.
The results of analyzing experimental data using a parametric model may depend heavily on the chosen model for the regression and variance functions, and also on a possibly underlying preliminary transformation of the variables. In this paper we propose and discuss a complex procedure which consists of a simultaneous selection of parametric regression and variance models from a relatively rich model class, together with Box-Cox variable transformations, by minimization of a cross-validation criterion. For this it is essential to introduce modifications of the standard cross-validation criterion adapted to each of the following objectives: 1. estimation of the unknown regression function, 2. prediction of future values of the response variable, 3. calibration, or 4. estimation of some parameter with a certain meaning in the corresponding field of application. Our idea of a criterion-oriented combination of procedures (which, when applied at all, are usually applied independently or sequentially) is expected to lead to more accurate results. We show how the accuracy of the parameter estimators can be assessed by a “moment oriented bootstrap procedure”, which is an essential modification of the “wild bootstrap” of Härdle and Mammen through the use of more accurate variance estimates. This new procedure and its refinement by a bootstrap-based pivot (“double bootstrap”) are also used for the construction of confidence, prediction and calibration intervals. Programs written in Splus which realize our strategy for nonlinear regression modelling and parameter estimation are described as well. The performance of the selected model is discussed, and the behaviour of the procedures is illustrated, e.g., by an application in radioimmunological assay.
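One ingredient of the above, the choice of a Box-Cox transformation, can be sketched as follows. We use the standard profile-likelihood criterion rather than the paper's cross-validation machinery, and the simulated lognormal data (true λ = 0) are our own illustrative assumption.

```python
import math, random

random.seed(10)

ys = [math.exp(random.gauss(1.0, 0.5)) for _ in range(200)]  # true lambda = 0
n = len(ys)
logsum = sum(math.log(y) for y in ys)

def boxcox(y, lam):
    # Box-Cox transform: (y^lam - 1)/lam, with log y as the lam -> 0 limit.
    return math.log(y) if abs(lam) < 1e-9 else (y ** lam - 1) / lam

def profile_ll(lam):
    # Profile log-likelihood for a constant-mean normal model on the
    # transformed scale, including the Jacobian term (lam - 1) * sum(log y).
    z = [boxcox(y, lam) for y in ys]
    m = sum(z) / n
    rss = sum((v - m) ** 2 for v in z)
    return -0.5 * n * math.log(rss / n) + (lam - 1) * logsum

grid = [i / 10 for i in range(-20, 21)]       # lambda in [-2, 2]
best = max(grid, key=profile_ll)

assert abs(best) <= 0.5    # the chosen lambda should land near the true 0
```

In the paper's procedure this choice is folded into the cross-validation criterion instead of being made by likelihood alone.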

9.
The purpose of this article is to introduce a new class of extended E(s2)-optimal two-level supersaturated designs (SSDs) obtained by adding runs to an existing E(s2)-optimal two-level supersaturated design. The extended design is a union of two optimal SSDs belonging to different classes. A new lower bound on E(s2) is obtained for the extended supersaturated designs. Some examples and a small catalogue of E(s2)-optimal SSDs are also included.
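For reference, the E(s²) criterion itself is easy to compute: it is the average of the squared inner products s_ij over all pairs of design columns. The toy 4-run, 4-column ±1 matrix below is illustrative only; it is neither supersaturated nor one of the paper's optimal designs.

```python
from itertools import combinations

# Columns of a toy two-level design (each inner list is one column).
cols = [
    [ 1,  1, -1, -1],
    [ 1, -1,  1, -1],
    [ 1, -1, -1,  1],
    [ 1,  1,  1, -1],
]

def e_s2(cols):
    # Average of squared inner products s_ij over all column pairs.
    pairs = list(combinations(cols, 2))
    return sum(sum(a * b for a, b in zip(u, v)) ** 2 for u, v in pairs) / len(pairs)

val = e_s2(cols)
# Columns 1-3 are mutually orthogonal; each has inner product +/-2 with
# column 4, so E(s^2) = (4 + 4 + 4) / 6 = 2.
assert val == 2.0
```

Smaller E(s²) means the columns are closer to orthogonal on average, which is the sense in which the paper's designs are optimal.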

10.
A p-value is developed for testing the equality of the variances of a bivariate normal distribution. The unknown correlation coefficient is a nuisance parameter in the problem. If the correlation is known, the proposed p-value provides an exact test. For large samples, the p-value can be computed by replacing the unknown correlation with the sample correlation, and the resulting test is quite satisfactory. For small samples, it is proposed to compute the p-value by replacing the unknown correlation with a scalar multiple of the sample correlation. However, a single scalar is not satisfactory, and it is proposed to use different scalars depending on the magnitude of the sample correlation coefficient. In order to implement this approach, tables are provided giving sub-intervals for the sample correlation coefficient and the scalars to be used when the sample correlation coefficient belongs to a particular sub-interval. Once such tables are available, the proposed p-value is easy to compute since it has an explicit analytic expression. Numerical results on the type I error probability and power of the test are reported, and the proposed p-value test is also compared to another test based on a rejection region. The results are illustrated with two examples: one dealing with the comparability of two measuring devices, and one dealing with the assessment of bioequivalence.
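A classical device relevant to this problem (an aid to intuition, not the authors' p-value) is the Pitman-Morgan identity: for bivariate (X, Y), Cov(X + Y, X − Y) = Var(X) − Var(Y) regardless of the correlation, so equal variances correspond to zero covariance between sums and differences. The simulation below (our own parameter choices) verifies the identity numerically.

```python
import math, random

random.seed(3)

def gen(n, sx, sy, rho):
    # Bivariate normal draws with sds sx, sy and correlation rho.
    out = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        x = sx * z1
        y = sy * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
        out.append((x, y))
    return out

def cov_sum_diff(pairs):
    # Sample covariance of (X + Y, X - Y); estimates Var(X) - Var(Y).
    n = len(pairs)
    s = [x + y for x, y in pairs]
    d = [x - y for x, y in pairs]
    ms, md = sum(s) / n, sum(d) / n
    return sum((a - ms) * (b - md) for a, b in zip(s, d)) / (n - 1)

equal = cov_sum_diff(gen(20000, 1.0, 1.0, 0.6))    # Var(X) = Var(Y)
unequal = cov_sum_diff(gen(20000, 1.0, 2.0, 0.6))  # Var(Y) = 4 Var(X)

assert abs(equal) < 0.1          # ~ Var(X) - Var(Y) = 0
assert abs(unequal + 3.0) < 0.2  # ~ 1 - 4 = -3
```

The nuisance correlation drops out of the covariance, which is why sums and differences are the natural coordinates for this testing problem.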

11.
It is frequently the case that a response will be related to both a vector of finite length and a function-valued random variable as predictor variables. In this paper, we propose new estimators for the parameters of a partial functional linear model which explores the relationship between a scalar response variable and mixed-type predictors. Asymptotic properties of the proposed estimators are established and finite sample behavior is studied through a small simulation experiment.

12.
Using 1998 and 1999 singleton birth data from the State of Florida, we study the stability of classification trees. Tree stability depends on both the learning algorithm and the specific data set. In this study, test samples are used in statistical learning to evaluate both stability and predictive performance. We also use the bootstrap resampling technique, which can be regarded as data self-perturbation, to evaluate the sensitivity of the modeling algorithm with respect to the specific data set. We demonstrate that the selection of the cost function plays an important role in stability. In particular, classifiers with equal misclassification costs and equal priors are less stable than those with unequal misclassification costs and equal priors.
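The bootstrap-as-perturbation idea can be sketched on a toy problem (a one-split "stump" classifier of our own devising, not the paper's classification trees): refit the classifier on bootstrap resamples of the training data and examine how much the learned split point moves.

```python
import random

random.seed(4)

def make_data(n):
    # One-dimensional feature; label is mostly 1 above x = 0.5, mostly 0 below.
    data = []
    for _ in range(n):
        x = random.random()
        p1 = 0.9 if x > 0.5 else 0.1        # P(label = 1)
        data.append((x, 1 if random.random() < p1 else 0))
    return data

def fit_stump(data):
    # Pick the threshold (grid search) minimizing training error,
    # predicting label 1 above the split.
    best_t, best_err = 0.0, len(data) + 1
    for t in [i / 100 for i in range(101)]:
        err = sum((x > t) != (y == 1) for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

train = make_data(300)
# Refit the stump on 30 bootstrap resamples (data self-perturbation).
boots = [fit_stump([random.choice(train) for _ in train]) for _ in range(30)]

spread = max(boots) - min(boots)           # instability of the split point
assert 0.0 <= spread <= 1.0
assert abs(sum(boots) / len(boots) - 0.5) < 0.15   # splits cluster near 0.5
```

A small spread across bootstrap refits indicates a stable learner on this data set; the paper's study applies the same logic to full classification trees under different cost functions.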

13.
We consider a multivariate linear model for multivariate controlled calibration, and construct some conservative confidence regions, which are nonempty and invariant under nonsingular transformations. The computation of our confidence region is easier compared to some of the existing procedures. We illustrate the results using two examples. The simulation results show the closeness of the coverage probability of our confidence regions to the assumed confidence level.

14.
A modified large-sample (MLS) approach and a generalized confidence interval (GCI) approach are proposed for constructing confidence intervals for intraclass correlation coefficients. Two particular intraclass correlation coefficients are considered in a reliability study. Both subjects and raters are assumed to be random effects in a balanced two-factor design, which includes subject-by-rater interaction. Computer simulation is used to compare the coverage probabilities of the proposed MLS (GiTTCH) and GCI approaches with the Leiva and Graybill [1986. Confidence intervals for variance components in the balanced two-way model with interaction. Comm. Statist. Simulation Comput. 15, 301–322] method. The competing approaches are illustrated with data from a gauge repeatability and reproducibility study. The GiTTCH method maintains at least the stated confidence level for interrater reliability. For intrarater reliability, the coverage is accurate in several circumstances but can be liberal in others. The GCI approach provides reasonable coverage for lower confidence bounds on interrater reliability, but its corresponding upper bounds are too liberal. Regarding intrarater reliability, the GCI approach is not recommended because the lower bound coverage is liberal. Comparing the overall performance of the three methods across a wide array of scenarios, the proposed MLS approach (GiTTCH) provides the most accurate coverage for both interrater and intrarater reliability.
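For orientation, a simpler relative of the quantities above is the one-way intraclass correlation ICC(1,1) = (MSB − MSW)/(MSB + (k − 1)MSW). The sketch below uses our own simulated one-way design, without the paper's rater and interaction effects, just to show the mean-square mechanics.

```python
import random

random.seed(6)

n_subj, k = 30, 4          # subjects, ratings per subject
# Subject effects with variance 4; measurement error with variance 1,
# so the true ICC is 4 / (4 + 1) = 0.8.
subj_eff = [random.gauss(0, 2.0) for _ in range(n_subj)]
data = [[mu + random.gauss(0, 1.0) for _ in range(k)] for mu in subj_eff]

grand = sum(sum(row) for row in data) / (n_subj * k)
row_means = [sum(row) / k for row in data]

# One-way ANOVA mean squares: between subjects and within subjects.
msb = k * sum((m - grand) ** 2 for m in row_means) / (n_subj - 1)
msw = sum((x - m) ** 2
          for row, m in zip(data, row_means) for x in row) / (n_subj * (k - 1))

icc = (msb - msw) / (msb + (k - 1) * msw)
assert 0.5 < icc < 0.95    # estimate should land near the true 0.8
```

The two-way model in the paper adds rater and interaction mean squares to this recipe, which is what makes exact interval construction hard and motivates the MLS and GCI approximations.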

15.
Moments and central moments of a random variable X are expressed as integrals of functions of lower-order conditional moments and the cumulative distribution of X. In particular, sample central moments of order 2k are expressed as the sum of between-group variations, providing an analogue to the analysis of variance. Similar expressions are obtained for the expectations of real-valued and measurable functions of X.
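The k = 1 case of the between-group decomposition is the familiar analysis-of-variance identity, which holds exactly when population-style divisors are used; the sketch below (with our own simulated groups) verifies it.

```python
import random

random.seed(5)

# Grouped data: group g has mean g, unit variance, random size.
groups = {g: [random.gauss(g, 1.0) for _ in range(random.randint(5, 15))]
          for g in range(4)}
allx = [x for xs in groups.values() for x in xs]
n = len(allx)
grand = sum(allx) / n

# Population-style (divide-by-n) variance components.
total = sum((x - grand) ** 2 for x in allx) / n
within = sum(sum((x - sum(xs) / len(xs)) ** 2 for x in xs)
             for xs in groups.values()) / n
between = sum(len(xs) * (sum(xs) / len(xs) - grand) ** 2
              for xs in groups.values()) / n

# Exact identity: total variation = within-group + between-group.
assert abs(total - (within + between)) < 1e-10
```

The paper's result generalizes this exact decomposition to central moments of order 2k.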

16.
In this paper, the hypothesis testing and interval estimation for the intraclass correlation coefficients are considered in a two-way random effects model with interaction. Two particular intraclass correlation coefficients are described in a reliability study. The tests and confidence intervals for the intraclass correlation coefficients are developed when the data are unbalanced. One approach is based on the generalized p-value and generalized confidence interval, the other is based on the modified large-sample idea. These two approaches simplify to the ones in Gilder et al. [2007. Confidence intervals on intraclass correlation coefficients in a balanced two-factor random design. J. Statist. Plann. Inference 137, 1199–1212] when the data are balanced. Furthermore, some statistical properties of the generalized confidence intervals are investigated. Finally, some simulation results to compare the performance of the modified large-sample approach with that of the generalized approach are reported. The simulation results indicate that the modified large-sample approach performs better than the generalized approach in the coverage probability and expected length of the confidence interval.

17.
A harmonic new better than used in expectation (HNBUE) variable is a random variable which is dominated by an exponential distribution in the convex stochastic order. We use a recently obtained condition on stochastic equality under convex domination to derive characterizations of the exponential distribution and bounds for HNBUE variables based on the mean values of the order statistics of the variable. We apply the results to generate discrepancy measures to test whether a random variable is exponential against the alternative that it is HNBUE but not exponential.
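The order-statistic means that drive such bounds have a closed form in the exponential case: E[X_{k:n}] = μ · Σ_{i=n−k+1}^{n} 1/i. The check below (our own simulation) verifies this standard fact, which the exponential-versus-HNBUE comparison builds on.

```python
import random

random.seed(7)

# For Exp(mean mu), E[k-th order statistic of n] = mu * sum_{i=n-k+1}^n 1/i.
mu, n, k, reps = 1.0, 5, 3, 30000
theory = mu * sum(1.0 / i for i in range(n - k + 1, n + 1))  # 1/3 + 1/4 + 1/5

sims = []
for _ in range(reps):
    xs = sorted(random.expovariate(1.0 / mu) for _ in range(n))
    sims.append(xs[k - 1])        # k-th smallest
emp = sum(sims) / reps

assert abs(emp - theory) < 0.02   # Monte Carlo mean matches the closed form
```

Discrepancies between observed order-statistic means and these exponential values are exactly the kind of quantity the paper turns into a test statistic.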

18.
Joint modeling of degradation and failure time data
This paper surveys some approaches to model the relationship between failure time data and covariate data like internal degradation and external environmental processes. These models which reflect the dependency between system state and system reliability include threshold models and hazard-based models. In particular, we consider the class of degradation–threshold–shock models (DTS models) in which failure is due to the competing causes of degradation and trauma. For this class of reliability models we express the failure time in terms of degradation and covariates. We compute the survival function of the resulting failure time and derive the likelihood function for the joint observation of failure times and degradation data at discrete times. We consider a special class of DTS models where degradation is modeled by a process with stationary independent increments and related to external covariates through a random time scale and extend this model class to repairable items by a marked point process approach. The proposed model class provides a rich conceptual framework for the study of degradation–failure issues.
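A minimal member of the degradation-threshold family can be sketched as follows (our toy setup, not the paper's DTS class): a Wiener degradation path with positive drift hits a fixed threshold, and the first-passage time is inverse Gaussian with mean threshold/drift.

```python
import random

random.seed(8)

# Euler simulation of a Wiener degradation path until it crosses a threshold.
drift, sigma, thresh, dt = 0.5, 0.3, 2.0, 0.01

def first_passage():
    x, t = 0.0, 0.0
    while x < thresh:
        x += drift * dt + sigma * random.gauss(0, 1) * dt ** 0.5
        t += dt
    return t

times = [first_passage() for _ in range(2000)]
mean_t = sum(times) / len(times)

# Inverse Gaussian mean = threshold / drift = 4.0 (up to discretization bias).
assert abs(mean_t - thresh / drift) < 0.3
```

Adding an independent traumatic shock process on top of this degradation path yields the competing-causes structure of the DTS models surveyed above.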

19.
20.
This paper considers the problem of testing a sub-hypothesis in homoscedastic linear regression models where the errors form long memory moving average processes and the designs are non-random. Unlike in the random design case, the asymptotic null distribution of the likelihood ratio type test based on the Whittle quadratic form is shown to be non-standard and non-chi-square. Moreover, the rate of consistency of the minimum Whittle dispersion estimator of the slope parameter vector is shown to be n^{-(1-α)/2}, different from the rate n^{-1/2} obtained in the random design case, where α is the rate at which the error spectral density explodes at the origin. The proposed test is shown to be consistent against fixed alternatives and has non-trivial asymptotic power against local alternatives that converge to the null hypothesis at the rate n^{-(1-α)/2}.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号