20 similar documents found; search took 31 ms.
1.
Cibele Queiroz da-Silva, Eduardo G. Martins, Vinícius Bonato, Sérgio Furtado dos Reis. Communications in Statistics - Simulation and Computation, 2013, 42(4): 816-828
We develop a series of Bayesian statistical models for estimating survival of a neotropical didelphid marsupial, the Brazilian gracile mouse opossum (Gracilinanus microtarsus). These models are based on the Cormack–Jolly–Seber model (Cormack, 1964; Jolly, 1965; Seber, 1965), with both survival and recapture rates expressed as functions of covariates through a logit link. The proposed models account for heterogeneity in capture probability caused by the existence of different groups of individuals in the population. The models were applied to two cohorts (2000 and 2001), with 14 and 15 sampling occasions, respectively. The best models for each cohort indicate that G. microtarsus is best described as partially semelparous, a condition in which mortality after the first mating is high but graded over time, with a fraction of males surviving to a second breeding season (Boonstra, 2005).
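As an illustrative sketch (not the authors' fitted model), the logit-link parameterization and the core Cormack–Jolly–Seber likelihood contribution can be coded directly; the coefficients and covariate value below are hypothetical, and the term for the period after the last capture is omitted for brevity:

```python
import math

def inv_logit(x):
    """Map a linear predictor to a probability via the logit link."""
    return 1.0 / (1.0 + math.exp(-x))

def cjs_core_prob(history, phi, p):
    """Cormack-Jolly-Seber probability of the part of a capture history
    between first and last capture: the animal must survive each interval
    (probability phi) and is then recaptured (p) or missed (1 - p)."""
    first = history.index(1)
    last = len(history) - 1 - history[::-1].index(1)
    prob = 1.0
    for t in range(first + 1, last + 1):
        prob *= phi * (p if history[t] == 1 else (1.0 - p))
    return prob

# survival and recapture expressed through a logit link in a covariate x
x = 0.5                            # hypothetical covariate value
phi = inv_logit(0.8 - 1.2 * x)     # hypothetical survival coefficients
p = inv_logit(-0.3 + 0.6 * x)      # hypothetical recapture coefficients
print(cjs_core_prob([1, 0, 1, 1], phi, p))
```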
2.
Interval estimation of the difference of two independent binomial proportions is an important problem in many applied settings. Newcombe (1998) compared the performance of several existing asymptotic methods and, based on the results obtained, recommended a method known as Wilson's method, a modified version of a method originally proposed for a single binomial proportion. In this article, we propose a method based on the profile likelihood, where the likelihood is weighted by the noninformative Jeffreys prior. Extensive simulations show that the proposed method performs well compared to Wilson's method. A SAS/IML program implementing the method accompanies this article.
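As a sketch of the comparator method only (Newcombe's Wilson-based hybrid interval, not the proposed profile-likelihood interval), the construction combines the two single-sample Wilson limits; z = 1.96 gives an approximate 95% interval:

```python
import math

def wilson(x, n, z=1.96):
    """Wilson score interval for a single binomial proportion."""
    p = x / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def newcombe_diff(x1, n1, x2, n2, z=1.96):
    """Newcombe's (1998) hybrid score interval for p1 - p2, built from
    the distances of each Wilson limit to its point estimate."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

print(newcombe_diff(15, 40, 5, 40))
```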
3.
We consider non-parametric estimation of a continuous cdf of a random vector (X1, X2). With bivariate right-censored (RC) data, it is stated in van der Laan (1996, p. 598, Ann. Statist.), Quale et al. (2006, JASA), and elsewhere that “it is well known that the NPMLE for continuous data is inconsistent (Tsai et al. (1986)).” The claim is based on a result in Tsai et al. (1986, p. 1352, Ann. Statist.) that if X1 is right censored but X2 is not, then common ways of defining an NPMLE lead to inconsistency. If X1 is right censored and X2 is type I right-censored (which includes the case in Tsai et al.), we present a consistent NPMLE. The result corrects a common misinterpretation of Tsai's example (Tsai et al., 1986, Ann. Statist.).
4.
Nezhat Shakeri. Communications in Statistics - Theory and Methods, 2013, 42(5): 777-790
The concept of left censoring has been defined in different ways in statistical applications. Turnbull (1974) defines it in one particular way, whereas the recent literature, especially in epidemiological studies, defines it differently. This difference between the two approaches is the main reason that, despite its simplicity, the Turnbull method is not applicable to all cases of doubly censored data. In this article we present a modified Turnbull method for the analysis of doubly censored data that is consistent with the recent definition. Comparisons are made with other statistical methods, including an imputation estimator and full likelihood-based and conditional likelihood-based approaches, using Iranian HIV data.
5.
6.
Samaradasa Weerahandi. Communications in Statistics - Theory and Methods, 2013, 42(22): 4069-4095
Motivated by a number of drawbacks of classical methods of point estimation, we generalize the definitions of point estimation and address such notions as unbiasedness and estimation under constraints. The utility of the extension is shown by deriving more reliable estimates for small coefficients of regression models, and for variance components and random effects of mixed models. The extension is in the spirit of the generalized confidence intervals introduced by Weerahandi (1993) and should encourage much-needed further research in point estimation in unbalanced models, multivariate models, non-normal models, and nonlinear models.
7.
In this article, a multivariate threshold varying conditional correlation (TVCC) model is proposed. The model extends the idea of Engle (2002) and Tse and Tsui (2002) to a threshold framework. This model retains the interpretation of the univariate threshold GARCH model and allows for dynamic conditional correlations. Techniques of model identification, estimation, and model checking are developed. Some simulation results are reported on the finite sample distribution of the maximum likelihood estimate of the TVCC model. Real examples demonstrate the asymmetric behavior of the mean and the variance in financial time series and the ability of the TVCC model to capture these phenomena.
8.
The cost and duration of many industrial experiments can be reduced by using supersaturated designs, which screen the important factors out of a large set of potentially active variables. A supersaturated design is a design with fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been widely studied, methods for their analysis are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure known as symmetrical uncertainty. This measure comes from information theory and is the main idea behind variable selection algorithms developed in data mining; here it is used from another viewpoint, to determine the important factors more directly. The method enables us to use supersaturated designs for analyzing data from generalized linear models with a Bernoulli response. We evaluate the method using some existing supersaturated designs obtained by the methods of Tang and Wu (1997) and Koukouvinos et al. (2008). The comparison is performed via simulation experiments, and the Type I and Type II error rates are calculated. Additionally, Receiver Operating Characteristic (ROC) curve methodology is applied as a further statistical tool for performance evaluation.
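The symmetrical uncertainty measure itself is simple to compute for discrete data; the following is a minimal sketch of the measure alone (the article's factor-screening machinery is not reproduced):

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (in bits) of a discrete sample."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(xs, ys):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)): a correlation-type measure
    in [0, 1]; 0 means no in-sample association, 1 means each variable
    fully determines the other."""
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    mi = hx + hy - entropy(list(zip(xs, ys)))  # mutual information I(X; Y)
    return 2.0 * mi / (hx + hy)

x = [0, 0, 1, 1]
print(symmetrical_uncertainty(x, x))              # identical variables -> 1.0
print(symmetrical_uncertainty(x, [0, 1, 0, 1]))   # no association -> 0.0
```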
9.
Pao-Sheng Shen. Communications in Statistics - Simulation and Computation, 2013, 42(3): 603-612
In this article, we consider M-estimators for the linear regression model when both the response and covariate variables are subject to double censoring. The proposed estimators are constructed as functionals of three types of estimators of a bivariate survival distribution. The first two are the generalizations of the Campbell and Földes (1982) and Dabrowska (1988) estimators proposed by Shen (2009); the third is a generalization of the Prentice and Cai (1992) estimator. The consistency of the proposed M-estimators is established. A simulation study is conducted to investigate the performance of the proposed estimators. Furthermore, simple bootstrap methods are used to estimate standard deviations and construct interval estimators.
10.
In this article, we find designs insensitive to the presence of an outlier in a diallel cross design setup for estimating a complete set of orthonormal contrasts among the general combining ability effects of a set of parental lines. The criterion of robustness suggested by Mandal (1989) in the block design setup, and used by Biswas (2012) in the treatment-control setup, is adapted here. Complete diallel cross designs, suggested by Gupta and Kageyama (1994), and partial diallel cross designs, suggested by Gupta et al. (1995) and Mukerjee (1997), are found to be robust under certain conditions.
11.
This paper reviews and extends the literature on the finite-sample behavior of tests for sample selection bias. Monte Carlo results show that, when the “multicollinearity problem” identified by Nawata (1993) is severe, (i) the t-test based on the Heckman–Greene variance estimator can be unreliable, (ii) the likelihood ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by maximum likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996) that the standard regression-based t-test (Heckman, 1979) and the asymptotically efficient Lagrange multiplier test (Melino, 1982) are robust to nonnormality but have very little power.
12.
Feng-Shou Ko. Communications in Statistics - Theory and Methods, 2013, 42(15): 2681-2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. The method is an extension of the earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similarly to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects of the longitudinal biomarkers influence the power to detect an association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using gastric cancer data.
13.
Several panel unit root tests that account for cross-section dependence using a common factor structure have recently been proposed in the literature. Pesaran's (2007) cross-sectionally augmented unit root tests are designed for cases where cross-sectional dependence is due to a single factor. The Moon and Perron (2004) tests, which use defactored data, are similar in spirit but can account for multiple common factors. The Bai and Ng (2004a) tests allow one to determine the source of nonstationarity by testing for unit roots in the common factors and the idiosyncratic factors separately. Breitung and Das (2008) and Sul (2007) propose panel unit root tests for when cross-section dependence is present, possibly due to common factors, but the common factor structure is not fully exploited. This article makes four contributions: (1) it compares the testing procedures in terms of similarities and differences in the data generation process, tests, and null and alternative hypotheses considered; (2) using Monte Carlo results, it compares the small-sample properties of the tests in models with up to two common factors; (3) it provides an application which illustrates the use of the tests; and (4) it discusses the use of the tests in modelling in general.
14.
M. Hakan Satman. Communications in Statistics - Simulation and Computation, 2013, 42(5): 644-652
The authors introduce an algorithm for estimating the least trimmed squares (LTS) parameters in large data sets. The algorithm performs a genetic algorithm search to form a basic subset that is unlikely to contain outliers. Rousseeuw and van Driessen (2006) suggested drawing independent basic subsets and iterating C-steps many times to minimize the LTS criterion. The authors' algorithm instead constructs a genetic algorithm to form a basic subset and iterates C-steps to calculate the cost value of the LTS criterion. Genetic algorithms are successful methods for optimizing nonlinear objective functions but are slow in many cases; the genetic algorithm configuration in the proposed algorithm can be kept simple because only a small number of observations is searched from the data. An R package is prepared to perform Monte Carlo simulations with the algorithm. Simulation results show that the algorithm's performance is suitable even for large data sets, because a small number of trials is always performed.
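The C-step at the heart of such algorithms (Rousseeuw and van Driessen's concentration step) is easy to sketch for simple regression; the data and starting subset below are illustrative, not from the article:

```python
def ols(x, y):
    """Least-squares intercept and slope for a scalar covariate."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - b * mx, b

def c_step(x, y, subset, h):
    """One concentration step: fit OLS on the current subset, then keep
    the h observations with the smallest squared residuals. A C-step
    never increases the LTS criterion, so iterating it converges."""
    a, b = ols([x[i] for i in subset], [y[i] for i in subset])
    resid2 = [(y[i] - a - b * x[i]) ** 2 for i in range(len(y))]
    return sorted(range(len(y)), key=lambda i: resid2[i])[:h]

# illustrative data: y = 2x with one gross outlier at the last point
x = list(range(10))
y = [2.0 * xi for xi in x]
y[9] = 100.0
subset = c_step(x, y, list(range(8)), 8)  # start from the first 8 points
print(sorted(subset))                     # the outlier (index 9) is excluded
```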
15.
The distributions of coherent systems with components with exchangeable lifetimes can be represented as mixtures of distributions of order statistics (k-out-of-n systems) from possibly dependent samples by using the concept of the signature of Samaniego (1985). This representation, together with Rychlik's (1993) results, can be used to obtain sharp bounds on the distribution (or the reliability) function and on the expected lifetime of the system. Also, this representation can be used to determine the asymptotic behavior of the hazard rate of the system when the order statistics are ordered in the hazard rate order. Moreover, the lifetime distributions of coherent systems (and in particular, of order statistics) can also be represented as generalized mixtures (that is, mixtures with some negative weights) of distributions of series system lifetimes by using the concept of the minimal signature defined by Navarro et al. (2007a). This representation can also be used to determine the final behavior of the hazard rate of the system through the behavior of the hazard rate of the series systems. In particular, it can be used to show that the order statistics are, under some conditions, asymptotically hazard rate ordered. However, in general, this result is not true, that is, the order statistics need not be hazard rate ordered.
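Samaniego's representation can be checked numerically for a small system. Below is a sketch for iid exponential components and the 3-component system T = min(X1, max(X2, X3)), whose signature is (1/3, 2/3, 0); the mixture of order-statistic survival functions reproduces the directly computed system reliability:

```python
import math
from math import comb

def os_survival(i, n, St):
    """P(X_(i:n) > t) for iid components with survival St = S(t): the
    i-th order statistic exceeds t iff at most i - 1 components fail by t."""
    Ft = 1.0 - St
    return sum(comb(n, k) * Ft ** k * St ** (n - k) for k in range(i))

def system_reliability(signature, St):
    """Samaniego's mixture: R_sys(t) = sum_i s_i * P(X_(i:n) > t)."""
    n = len(signature)
    return sum(s * os_survival(i + 1, n, St) for i, s in enumerate(signature))

t = 0.7
St = math.exp(-t)                      # exponential(1) component survival
sig = (1 / 3, 2 / 3, 0.0)              # signature of min(X1, max(X2, X3))
r_sig = system_reliability(sig, St)
r_direct = St * (1 - (1 - St) ** 2)    # direct: P(X1 > t) * P(max(X2, X3) > t)
print(r_sig, r_direct)
```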
16.
This article presents results concerning the performance of both single equation and system panel cointegration tests and estimators. The study considers the tests developed in Pedroni (1999, 2004), Westerlund (2005), Larsson et al. (2001), and Breitung (2005) and the estimators developed in Phillips and Moon (1999), Pedroni (2000), Kao and Chiang (2000), Mark and Sul (2003), Pedroni (2001), and Breitung (2005). We study the impact of stable autoregressive roots approaching the unit circle, of I(2) components, of short-run cross-sectional correlation and of cross-unit cointegration on the performance of the tests and estimators. The data are simulated from three-dimensional individual specific VAR systems with cointegrating ranks varying from zero to two for fourteen different panel dimensions. The usual specifications of deterministic components are considered.
17.
Pao-sheng Shen. Communications in Statistics - Theory and Methods, 2013, 42(20): 3319-3328
A complication in analyzing tumor data is that the tumors detected in a screening program tend to be slowly progressing tumors; this is the so-called length-biased sampling inherent in screening studies. Under the assumption that all subjects have the same tumor growth function, Ghosh (2008) developed estimation procedures for the proportional hazards model. In this article, by modeling the growth function as a function of covariates, we demonstrate that Ghosh's (2008) approach can be extended to the case where each subject has a specific growth function. A simulation study demonstrates the potential usefulness of the proposed estimators of the regression parameters in the proportional and additive hazards models.
18.
By applying the recursion of Huffer (1988) repeatedly, we propose an algorithm for evaluating the null joint distribution of Dixon-type test statistics for testing the discordancy of k upper outliers in exponential samples. Using the critical values of Dixon-type test statistics determined from the proposed algorithm, and those of the Cochran-type test statistics presented earlier by Lin and Balakrishnan (2009), we carry out an extensive Monte Carlo study to investigate the powers and the error probabilities for the effects of masking and swamping when the number of outliers is k = 2 or 3. Based on our empirical findings, we recommend Rosner's (1975) sequential test procedure based on Dixon-type test statistics for testing for multiple outliers from an exponential distribution.
19.
Gadre and Rattihalli [5] introduced the Modified Group Runs (MGR) control chart to identify increases in the fraction non-conforming and to detect shifts in the process mean. The MGR chart reduces the out-of-control average time-to-signal (ATS) compared with most well-known control charts. In this article, we develop the Side Sensitive Modified Group Runs (SSMGR) chart to detect shifts in the process mean. With the help of numerical examples, it is illustrated that the SSMGR chart performs better than Shewhart's X̄ chart, the synthetic chart [12], the Group Runs chart [4], the Side Sensitive Group Runs chart [6], and the MGR chart [5]. In some situations it is also superior to the cumulative sum chart [9] and the exponentially weighted moving average chart [10]. Its steady-state performance is also better than that of the above charts.
20.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3198-3210
The randomized response (RR) technique with two decks of cards proposed by Odumade and Singh (2009) can always be made more efficient than the RR techniques proposed by Warner (1965), Mangat and Singh (1990), and Mangat (1994) by adjusting the proportion of cards in the decks. The method of Odumade and Singh (2009) is, however, limited to simple random sampling with replacement (SRSWR). In this article, the Odumade and Singh strategy is generalized to complex survey designs and a wider class of estimators. The results of Odumade and Singh (2009) can be derived from the proposed method as a special case.
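For reference, the baseline Warner (1965) design that the compared techniques build on can be sketched as follows; the prevalence, deck proportion, and sample size below are illustrative:

```python
import random

def warner_estimate(yes, n, P):
    """Warner (1965) RR estimator: each respondent answers the sensitive
    question with probability P and its complement with 1 - P, so
    Pr(yes) = P*pi + (1 - P)*(1 - pi); invert for pi. Requires P != 0.5."""
    lam = yes / n
    return (lam - (1.0 - P)) / (2.0 * P - 1.0)

# simulate: true prevalence pi = 0.3, deck proportion P = 0.7
random.seed(1)
pi, P, n = 0.3, 0.7, 20000
yes = 0
for _ in range(n):
    sensitive = random.random() < pi          # respondent belongs to group A
    ask_direct = random.random() < P          # card: "I am a member of A"
    yes += sensitive if ask_direct else (not sensitive)
print(warner_estimate(yes, n, P))
```

The estimator is unbiased but pays a privacy premium: its variance grows as P approaches 0.5, which is exactly the kind of inefficiency the two-deck design of Odumade and Singh (2009) targets.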