Similar Documents
20 similar documents found.
1.
The problem of testing for a treatment effect based on binary response data is considered, assuming that the sample size for each combination of experimental unit and treatment is random. The sample size is assumed to follow a distribution belonging to a parametric family. The uniformly most powerful unbiased tests, which are equivalent to the likelihood ratio tests, are obtained when the probability of the sample size being zero is positive. For the situation where the sample sizes are always positive, the likelihood ratio tests are derived. These test procedures, which are unconditional on the random sample sizes, are useful even when the random sample sizes are not observed. Some examples are presented for illustration.

2.
Life distributions with hazard rate functions of the form r(t) = Ag(t) + Bh(t) are considered. It is assumed that g(t) and h(t) are known and do not depend on the unknown parameters A and B. The maximum likelihood estimators are studied for complete and censored samples. The estimation problem is reduced to solving a single equation in one unknown, and the solution is shown to be unique. The estimation procedure under an aging assumption is also described. Some comments on the asymptotic variance-covariance matrix are given, and tests of hypotheses are described for some cases.
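As an illustration of this hazard form, the following minimal sketch fits A and B by maximum likelihood for a complete sample, assuming the hypothetical choice g(t) = 1 and h(t) = t (a linear hazard); the simulated data and all parameter values are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical known baseline functions: g(t) = 1, h(t) = t, so r(t) = A + B*t;
# G and H are their integrals, giving the cumulative hazard R = A*G + B*H
g = lambda t: np.ones_like(t)
h = lambda t: t
G = lambda t: t
H = lambda t: t ** 2 / 2.0

def neg_loglik(params, t):
    A, B = np.exp(params)                 # log-parameterization keeps A, B > 0
    # log f(t) = log r(t) - R(t) for a complete (uncensored) sample
    return -np.sum(np.log(A * g(t) + B * h(t)) - (A * G(t) + B * H(t)))

rng = np.random.default_rng(0)
# simulate from the true hazard r(t) = 1 + 0.5*t by inverting the cumulative
# hazard R(t) = t + 0.25*t^2 at -log(U), U ~ Uniform(0, 1)
u = rng.uniform(size=500)
t_obs = (-1.0 + np.sqrt(1.0 - np.log(u))) / 0.5

res = minimize(neg_loglik, x0=np.log([0.5, 0.5]), args=(t_obs,))
print("A_hat, B_hat:", np.exp(res.x))     # should be near (1.0, 0.5)
```

Solving the score equations directly, as the paper does, reduces this to one equation in one unknown; the generic optimizer above is just a convenient stand-in.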

3.
The problem of estimating the risk corresponding to the reconstruction of a random pattern is reviewed. It is shown that for a particular but important model, the problem reduces to the estimation of two parameters closely related to those appearing in a two-state Markov chain, which is of independent interest. The estimation of the Markov chain's parameters is studied from the decision-theoretic point of view. Estimators that improve on those previously considered are obtained and adapted to the estimation of the corresponding risk. Examples are analyzed; even when a rather empirical method is used to assign values to the parameters of an a priori law, good estimators of the risk are obtained.

4.
The estimation of variance components in a heteroscedastic random model is discussed in this paper. Maximum likelihood (ML) estimation is described for one-way heteroscedastic random models. The proportionality condition, namely that the cell variance is proportional to the cell sample size, is used to eliminate the effect of heteroscedasticity. Algebraic expressions for the estimators are obtained for the model. These expressions depend mainly on the inverse of the variance-covariance matrix of the observation vector, so this matrix is derived and formulae for its inversion are given. A Monte Carlo study is conducted, considering five variance patterns with different numbers of cells. For each variance pattern, 1000 Monte Carlo samples are drawn, and the Monte Carlo biases and MSEs of the variance component estimators are calculated. With respect to both bias and MSE, the ML estimators of the variance components are found to perform well.
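The Monte Carlo study design described above can be sketched as follows. For simplicity this uses a balanced, homoscedastic one-way random model with classical ANOVA (method-of-moments) estimators rather than the paper's heteroscedastic ML estimators, so the numbers only illustrate the bias/MSE bookkeeping; all sample sizes and variance values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 10, 5                        # k cells, n observations per cell (balanced)
var_a, var_e = 2.0, 1.0             # true variance components

def anova_estimates(y):
    # classical ANOVA (method-of-moments) estimators for a balanced one-way model
    cell_means = y.mean(axis=1)
    msb = n * np.sum((cell_means - y.mean()) ** 2) / (k - 1)
    mse = np.sum((y - cell_means[:, None]) ** 2) / (k * (n - 1))
    return np.array([(msb - mse) / n, mse])       # (var_a_hat, var_e_hat)

reps, truth = 1000, np.array([var_a, var_e])
est = np.empty((reps, 2))
for r in range(reps):
    a = rng.normal(0.0, np.sqrt(var_a), size=(k, 1))   # random cell effects
    e = rng.normal(0.0, np.sqrt(var_e), size=(k, n))   # within-cell errors
    est[r] = anova_estimates(a + e)
print("Monte Carlo bias:", est.mean(axis=0) - truth)
print("Monte Carlo MSE :", ((est - truth) ** 2).mean(axis=0))
```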

5.
The problem of interval estimation of the stress-strength reliability involving two independent Weibull distributions is considered. An interval estimation procedure based on the generalized variable (GV) approach is given when the shape parameters are unknown and arbitrary. The coverage probabilities of the GV approach are evaluated by Monte Carlo simulation. Simulation studies show that the proposed GV approach is very satisfactory even for small samples. For the case of equal shape parameters, it is shown that the generalized confidence limits are exact. Some available asymptotic methods for the case of equal shape parameters are described and their coverage probabilities are evaluated using Monte Carlo simulation. Simulation studies indicate that no asymptotic approach based on the likelihood method is satisfactory even for large samples. Applicability of the GV approach to censored samples is also discussed. The results are illustrated using an example.
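A minimal point-estimation sketch of the stress-strength setting (not the GV interval procedure itself): each Weibull sample is fitted by maximum likelihood, R = P(X < Y) is evaluated by Monte Carlo, and the equal-shape case is checked against the closed form R = b^k / (a^k + b^k). Sample sizes and parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
k, a, b = 2.0, 1.0, 1.5       # common shape; scales of stress X and strength Y
x = stats.weibull_min.rvs(k, scale=a, size=30, random_state=rng)
y = stats.weibull_min.rvs(k, scale=b, size=30, random_state=rng)

def reliability(kx, ax, ky, ay, m=20000):
    # Monte Carlo evaluation of R = P(X < Y); works for unequal shapes too
    xs = stats.weibull_min.rvs(kx, scale=ax, size=m, random_state=rng)
    ys = stats.weibull_min.rvs(ky, scale=ay, size=m, random_state=rng)
    return np.mean(xs < ys)

kx, _, ax = stats.weibull_min.fit(x, floc=0)   # MLEs with location fixed at 0
ky, _, ay = stats.weibull_min.fit(y, floc=0)
print("R_hat:", reliability(kx, ax, ky, ay))
print("true R (equal shapes):", b**k / (a**k + b**k))
```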

6.
In this paper the use of Kronecker designs for factorial experiments is considered. The two-factor Kronecker design is examined in some detail, and the efficiency factors of the main effects and of the interaction in such a design are derived. It is shown that the efficiency factor of the interaction is at least as large as the product of the efficiency factors of the two main effects; when both component designs are totally balanced, the interaction's efficiency factor exceeds that of either main effect, and when the component designs are nearly balanced it is approximately at least as large as that of either main effect. It is argued that these designs are particularly useful for factorial experiments. Extensions to the multi-factor design are given, and it is proved that the two-factor Kronecker design will be connected if the component designs are connected.

7.
Estimating equations which are not necessarily likelihood-based score equations are becoming increasingly popular for estimating regression model parameters. This paper is concerned with estimation based on general estimating equations when true covariate data are missing for all the study subjects, but surrogate or mismeasured covariates are available instead. The method is motivated by the covariate measurement error problem in marginal or partly conditional regression of longitudinal data. We propose to base estimation on the expectation of the complete-data estimating equation conditioned on the available data. The regression parameters and other nuisance parameters are estimated simultaneously by solving the resulting estimating equations. The expected estimating equation (EEE) estimator equals the maximum likelihood estimator if the complete-data scores are likelihood scores and conditioning is with respect to all the available data. A pseudo-EEE estimator, which requires less computation, is also investigated. Asymptotic distribution theory is derived. Small-sample simulations are conducted when the error process is a first-order autoregressive model. Regression calibration is extended to this setting and compared with the EEE approach. We demonstrate the methods on data from a longitudinal study of the relationship between childhood growth and adult obesity.

8.
A structural regression model is considered in which some of the variables are measured with error. Instead of additive measurement errors, systematic biases are allowed by relating true and observed values via simple linear regressions. Additional data based on standards are available, which allows for "calibration" of the measuring methods involved. Using only moment assumptions, some simple estimators are proposed and their asymptotic properties are developed. The results parallel and extend those given by Fuller (1987), in which the errors are additive and the error covariance is estimated. Maximum likelihood estimation is also discussed, and the problem is illustrated using data from an acid rain study in which the relationship between pH and alkalinity is of interest but neither variable is observed exactly.
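The calibration step can be sketched as follows: known standards are measured, a simple linear regression of the readings on the true values estimates the systematic bias, and field readings are corrected by inverting the fitted line. All names and values below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# standards: known true values and the instrument's (systematically biased) readings
true_std = np.linspace(4.0, 8.0, 12)             # e.g. known pH standards
alpha, beta = 0.3, 0.95                          # hypothetical bias parameters
obs_std = alpha + beta * true_std + rng.normal(0.0, 0.05, true_std.size)

# fit observed = alpha + beta * true; polyfit returns (slope, intercept)
beta_hat, alpha_hat = np.polyfit(true_std, obs_std, 1)

w_field = 6.8                                    # a reading taken in the field
print("calibrated value:", (w_field - alpha_hat) / beta_hat)
```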

9.
The self-exciting threshold autoregressive moving average (SETARMA) nonlinear time-series model is considered here. Sufficient conditions for invertibility and stationarity are derived. A parameter estimation algorithm is developed by employing a real-coded genetic algorithm as the stochastic optimization procedure. A significant feature of this work is that optimal out-of-sample forecasts up to three steps ahead, together with their forecast error variances, are derived analytically. The relevant computer programs are written in the Statistical Analysis System (SAS) and C. As an illustration, annual mackerel catch time-series data are considered. The forecast performance of the fitted model on hold-out data is evaluated using naive and Monte Carlo approaches. The optimal out-of-sample forecast values are found to be quite close to the actual values, and the estimated variances are quite close to the theoretical values. The superiority of the SETARMA model over the SETAR model is also established via the Diebold–Mariano test of equal predictive ability.
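The Monte Carlo forecasting idea can be illustrated on a pure threshold autoregression. The sketch below simulates a simplified SETAR(2;1,1) process (no moving-average part, with the true parameters taken as known rather than estimated by a genetic algorithm) and computes multi-step-ahead forecast means and forecast error variances by simulating forward paths; all coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def setar_step(x, e):
    # simplified SETAR(2;1,1): the regime depends on the previous value
    return (0.6 * x if x <= 0.0 else -0.4 * x) + e

x = np.zeros(300)                       # simulate a trajectory
for t in range(1, 300):
    x[t] = setar_step(x[t - 1], rng.normal(0.0, 1.0))

h, m = 3, 5000                          # horizon and number of simulated paths
paths = np.empty((m, h))
for i in range(m):
    cur = x[-1]
    for j in range(h):
        cur = setar_step(cur, rng.normal(0.0, 1.0))
        paths[i, j] = cur
print("h-step forecast means    :", paths.mean(axis=0))
print("h-step forecast variances:", paths.var(axis=0))
```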

10.
In dealing with ties in failure time data, the mechanism by which the data are observed should be considered. If the data are discrete, the process is relatively simple and is determined by what is actually observed. With continuous data, ties are not supposed to occur, but they do because the data are grouped into intervals (even if only rounding intervals). In this case there is actually a non-identifiability problem which can only be resolved by modelling the process. Various reasonable modelling assumptions are investigated in this paper. They lead to better ways of dealing with ties between observed failure times and censoring times of different individuals. The current practice is to assume that the censoring times occur after all the failures with which they are tied.

11.
Data in many experiments arise as curves, so it is natural to take a curve as the basic unit of analysis; this is the viewpoint of functional data analysis (FDA). Functional curves are encountered when units are observed over time. Although the whole functional curve itself is not observed, a sufficiently large number of evaluations is assumed to be available, as is common with modern recording equipment. In this article, we consider statistical inference for the mean functions in the two-sample problem for functional data: assuming two groups of noise-free curves are observed, we test whether the groups share the same mean function. L2-norm-based and bootstrap-based test statistics are proposed. The proposed methodology is shown to be flexible. A simulation study and real-data examples are used to illustrate the techniques.
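A minimal sketch of an L2-norm-based two-sample test for curves observed on a common grid. For simplicity it calibrates the statistic by permuting group labels rather than by the bootstrap used in the article, and the simulated curves are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
grid = np.linspace(0.0, 1.0, 101)
# two groups of curves (rows = curves); here the true mean functions are equal
X = np.sin(2 * np.pi * grid) + rng.normal(0.0, 0.3, (20, grid.size))
Y = np.sin(2 * np.pi * grid) + rng.normal(0.0, 0.3, (25, grid.size))

def l2_stat(a_curves, b_curves):
    d = a_curves.mean(axis=0) - b_curves.mean(axis=0)
    # Riemann approximation of the integrated squared mean difference
    return np.mean(d ** 2) * (grid[-1] - grid[0])

t_obs = l2_stat(X, Y)
pooled, n_x = np.vstack([X, Y]), X.shape[0]
perm = np.empty(999)
for i in range(999):
    idx = rng.permutation(pooled.shape[0])       # reshuffle group labels
    perm[i] = l2_stat(pooled[idx[:n_x]], pooled[idx[n_x:]])
print("permutation p-value:", (1 + np.sum(perm >= t_obs)) / (999 + 1))
```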

12.
Staudte, R.G. and Zhang, J. (1997). Lifetime Data Analysis, 3(4), 383–398.
The p-value evidence for an alternative to a null hypothesis regarding the mean lifetime can be unreliable if based on asymptotic approximations when there is only a small sample of right-censored exponential data. However, a guarded weight of evidence for the alternative can always be obtained without approximation, no matter how small the sample, and it has some other advantages over p-values. Weights of evidence are defined as estimators of 0 when the null hypothesis is true and of 1 when the alternative is true, and they are judged on the basis of the ensuing risks, where risk is the mean squared error of estimation. The evidence is guarded in that a preassigned bound is placed on the risk under the hypothesis. Practical suggestions are given for choosing the bound and for interpreting the magnitude of the weight of evidence. Acceptability profiles are obtained by inverting a family of guarded weights of evidence for two-sided alternatives to point hypotheses, just as confidence intervals are obtained from tests; these profiles are arguably more informative than confidence intervals, and are easily determined for any level and any sample size, however small. They can help in understanding the effects of different amounts of censoring. They are found for several small data sets, including a sample of size 12 for post-operative cancer patients. Both singly Type I and Type II censored examples are included. An examination of the risk functions of these guarded weights of evidence suggests that if the censoring time is of the same magnitude as the mean lifetime, or larger, then the risks in using a guarded weight of evidence based on a likelihood ratio are not much larger than they would be if the parameter were known.

13.
The generalized Laplacian distribution is considered. A new distribution, called the geometric generalized Laplacian distribution, is introduced and its properties are studied. First- and higher-order autoregressive processes with these stationary marginal distributions are developed and studied. Simulation studies are conducted, and trajectories of the process are obtained for selected values of the parameters. Various areas of application of these models are discussed.
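For intuition about autoregressions with a prescribed Laplacian marginal, the sketch below simulates the classical first-order autoregression whose stationary marginal is exactly standard Laplace: the innovation is zero with probability rho^2 and standard Laplace otherwise. This is a simpler textbook analogue, not the geometric generalized Laplacian construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
rho, n = 0.7, 1000
x = np.empty(n)
x[0] = rng.laplace()                  # start in the stationary distribution
for t in range(1, n):
    # innovation: 0 with prob rho^2, standard Laplace with prob 1 - rho^2;
    # this choice makes the stationary marginal exactly standard Laplace
    eps = 0.0 if rng.uniform() < rho ** 2 else rng.laplace()
    x[t] = rho * x[t - 1] + eps
print("sample variance:", x.var(), "(the Laplace(0,1) variance is 2)")
```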

14.
An extension of the stochastic process associated with the geometric distribution is presented. Combinatorial arguments are used to derive probabilities for various events of interest. Probabilities are approximated by evaluating truncated series. Bounds on the errors of approximation are developed. An example is presented and some additional applications are noted.
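To illustrate the truncated-series idea, the sketch below approximates one such probability, namely the chance that the first success of a Bernoulli(p) sequence occurs on an odd-numbered trial, by a truncated geometric series, bounding the truncation error by the closed form of the neglected tail. The event and the value of p are chosen for the illustration only.

```python
import numpy as np

p, J = 0.3, 10                  # success probability; number of terms retained
q = 1.0 - p

# P(first success occurs on an odd-numbered trial) = sum_{j>=0} p * q^(2j)
terms = p * q ** (2 * np.arange(J))
approx = terms.sum()
# the neglected tail is itself a geometric series, so it has a closed-form bound
err_bound = p * q ** (2 * J) / (1.0 - q ** 2)
exact = p / (1.0 - q ** 2)      # closed form, for checking the approximation
print(f"approx={approx:.10f}  error_bound={err_bound:.2e}  exact={exact:.10f}")
```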

15.
A multitype epidemic model is analysed assuming proportionate mixing between types. Estimation procedures for the susceptibilities and infectivities are derived for three observation schemes: complete data, meaning that the whole epidemic process is observed continuously; continuous observation of the removal processes only; and observation of the final state only. Under the assumption of a major outbreak in a population of size n, it is shown that, for all three data sets, the susceptibility estimators are always efficient, i.e. consistent with a √n rate of convergence. The infectivity estimators are in most cases respectively efficient, efficient and unidentifiable. However, if some susceptibilities are equal, then the corresponding infectivity estimators are respectively barely consistent (√log(n) rate of convergence), not consistent and unidentifiable. The estimators are applied to simulated data.

16.
The two-sample problem for comparing Weibull scale parameters is studied for randomly censored data. Three different test statistics are considered and their asymptotic properties are established under a sequence of local alternatives. It is shown that both the test statistic based on the MLEs (maximum likelihood estimators) and the likelihood ratio test are asymptotically optimal; the third statistic, based only on the number of failures, is not. The asymptotic relative efficiency of this statistic is obtained and its numerical values are computed for uniform and Weibull censoring. Effects of uniform random censoring on the censoring level of the experiment are illustrated. A direct proof of the joint asymptotic normality of the MLEs of the shape and scale parameters is also given.
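In the uncensored case with a common, known shape parameter (a deliberate simplification of the setting above), the transformation x → x^k reduces the scale comparison to a two-sample exponential mean comparison with a closed-form likelihood ratio test, sketched below with invented parameter values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
k = 1.5                                  # common shape, assumed known here
x = stats.weibull_min.rvs(k, scale=1.0, size=40, random_state=rng)
y = stats.weibull_min.rvs(k, scale=1.3, size=40, random_state=rng)

# with known shape k, X**k is exponential, so testing equal Weibull scales
# becomes testing equal exponential means
u, v = x ** k, y ** k

def ell(z):
    # maximized exponential log-likelihood: -n*log(mean) - n
    return -z.size * (np.log(z.mean()) + 1.0)

lrt = 2.0 * (ell(u) + ell(v) - ell(np.concatenate([u, v])))
print("LRT statistic:", lrt, " p-value:", stats.chi2.sf(lrt, df=1))
```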

17.
Pharmaceutical companies and manufacturers of food products are legally required to label the product's shelf-life on the packaging. For pharmaceutical products, the requirements for how to determine the shelf-life are highly regulated; however, the regulatory documents do not specifically define the shelf-life. Instead, the definition is implied through the estimation procedure. In this paper, the focus is on the situation where multiple batches are used to determine a label shelf-life that is applicable to all future batches. The shortcomings of existing estimation approaches are discussed, and they are addressed by proposing new definitions of shelf-life and label shelf-life in which greater emphasis is placed on within- and between-batch variability. Furthermore, an estimation approach is developed and its properties are illustrated using a simulation study. Finally, the approach is applied to real data.

18.
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying format for representing frequentist inference for a single parameter. The confidence distribution, which depends on the data, is exact (unbiased) when its cumulative distribution function evaluated at the true parameter is uniformly distributed over the unit interval. A new version of the Neyman–Pearson lemma is given, showing that the confidence distribution based on the natural statistic in exponential models with continuous data is less dispersed than all other confidence distributions, regardless of how dispersion is measured. Approximations are necessary for discrete data, and also in many models with nuisance parameters; approximate pivots might then be useful. A pivot based on a scalar statistic determines a likelihood in the parameter of interest along with a confidence distribution. This proper likelihood is free of all nuisance parameters and is appropriate for meta-analysis and updating of information. The reduced likelihood is generally different from the confidence density. Confidence distributions and reduced likelihoods are rooted in Fisher–Neyman statistics. This frequentist methodology has many of the Bayesian attractions, and the two approaches are briefly compared. Concepts, methods and techniques of this brand of Fisher–Neyman statistics are presented. Asymptotics and bootstrapping are used to find pivots and their distributions, and hence reduced likelihoods and confidence distributions. A simple form of inverting bootstrap distributions to approximate pivots of the abc type is proposed. The material is illustrated in a number of examples and in an application to multiple-capture data for bowhead whales.
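A concrete instance of an exact confidence distribution, in a case simple enough to need no approximation: for exponential data with mean theta, the pivot 2*sum(x)/theta has a chi-squared distribution with 2n degrees of freedom, so C(theta) = 1 - F_chi2(2n)(2*sum(x)/theta) is an exact confidence distribution whose quantiles span equal-tailed confidence intervals. The data below are simulated for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = rng.exponential(scale=2.0, size=15)      # data with true mean theta = 2
n, s = x.size, x.sum()

# exact confidence distribution from the pivot 2s/theta ~ chi2(2n);
# the pairs (theta, C) trace out the confidence distribution curve
theta = np.linspace(0.5, 8.0, 400)
C = 1.0 - stats.chi2.cdf(2.0 * s / theta, df=2 * n)

# its quantiles give exact confidence limits, e.g. an equal-tailed 95% interval
lo = 2.0 * s / stats.chi2.ppf(0.975, 2 * n)
hi = 2.0 * s / stats.chi2.ppf(0.025, 2 * n)
print(f"confidence median: {2.0 * s / stats.chi2.ppf(0.5, 2 * n):.3f}")
print(f"95% interval: ({lo:.3f}, {hi:.3f})")
```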

19.
Various methods for estimating the parameters of the simple harmonic curve, and corresponding statistics for testing the significance of the sinusoidal trend, are investigated. The locally reasonable method is almost fully efficient when the size of the trend is very small; however, the maximum likelihood method is generally preferred, especially when the trend is not very small. The log likelihood ratio test is more powerful than the R test, which is based on locally reasonable estimates. The efficient method and the log likelihood ratio test (or equivalent tests) are the best statistical techniques for identifying the cyclical trend, and are thus the methods of choice when adequate computing facilities are available.
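As a baseline for the methods compared above, the sketch below fits the simple harmonic curve y_t = mu + A*cos(omega*t) + B*sin(omega*t) by ordinary least squares at a known frequency and tests the sinusoidal trend (A = B = 0) with a standard F test. This is the textbook least-squares fit, not the paper's R test or locally reasonable method, and the simulated values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 120
t = np.arange(n)
omega = 2 * np.pi * 6 / n            # a known candidate frequency (6 cycles)
y = 1.0 + 0.4 * np.cos(omega * t) + 0.3 * np.sin(omega * t) + rng.normal(0, 1, n)

# least squares fit of y = mu + A*cos(omega*t) + B*sin(omega*t)
X = np.column_stack([np.ones(n), np.cos(omega * t), np.sin(omega * t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss0 = np.sum((y - y.mean()) ** 2)   # residual SS of the constant-only model
rss1 = np.sum((y - X @ beta) ** 2)   # residual SS of the harmonic model

# F test of the sinusoidal trend with (2, n - 3) degrees of freedom
F = ((rss0 - rss1) / 2.0) / (rss1 / (n - 3))
print("mu, A, B:", beta, " F:", F, " p-value:", stats.f.sf(F, 2, n - 3))
```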

20.
The pattern of absenteeism in the downsizing process of companies is a topic in focus in economics and social science. A general question is whether employees who are frequently absent are more likely to be selected to be laid off, or, in contrast, whether employees to be dismissed are more likely to be absent for the remaining time of their working contract. We pursue an empirical, microeconomic investigation of these hypotheses. We analyse longitudinal data that were collected in a German company over several years. We fit a semiparametric transition model based on a mixture Poisson distribution for the days of absenteeism per month. Prediction intervals are considered, and the primary focus is on the period of downsizing. The data reveal clear evidence for the hypothesis that employees who are to be laid off are more frequently absent before leaving the company. Interestingly, though, no clear evidence is seen that employees selected to leave the company are those with a bad absenteeism profile.
