Similar Literature
20 similar records found
1.
In this article, we develop two general classes of discrete bivariate distributions and derive general formulas for the joint distributions belonging to them. The formulas are general in the sense that new families of distributions can be generated simply by specifying the “baseline seed distributions.” The dependence structures of the bivariate distributions in the proposed classes, along with their basic statistical properties, are also discussed, and new families of discrete bivariate distributions are generated from the classes. Furthermore, to assess the usefulness of the proposed classes, two discrete bivariate distributions generated from them are applied to a real dataset and the results are compared with those obtained from conventional models.

2.
The 2 × 2 tables used to present the data in an experiment comparing two proportions, by means of two observations of two independent binomial distributions, may appear simple but are not. The debate about the best method of analysis is unending and has divided statisticians into practically irreconcilable groups. In this article, all the available non-asymptotic tests are reviewed (except the Bayesian methodology). The author states which test is optimal for each group, refers to the tables and programs that exist for them, and contrasts the arguments used by supporters of each option. The author also sorts the tangle of solutions into "families," based on the methodology used and/or prior assumptions, and points out the most frequent methodological mistakes committed when comparing the different families.
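Two of the non-asymptotic tests surveyed above are available in scipy; a minimal sketch on an illustrative 2 × 2 table (not data from the paper), comparing Fisher's conditional exact test with Barnard's unconditional one:

```python
# Hedged sketch: exact tests for comparing two proportions in a 2x2 table.
# The table below is made up for illustration.
import numpy as np
from scipy.stats import fisher_exact, barnard_exact

# rows: group 1 / group 2; columns: success / failure
table = np.array([[7, 3],
                  [2, 8]])

# Conditional exact test (conditions on both margins).
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")

# Unconditional exact test (conditions on row totals only);
# typically less conservative than Fisher's test.
res = barnard_exact(table, alternative="two-sided")
print(f"Fisher p = {p_fisher:.4f}, Barnard p = {res.pvalue:.4f}")
```

The conditional/unconditional split shown here is exactly the kind of "family" division the review describes.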

3.
In this article, we analyze interval-censored failure time data with competing risks. A new estimator for the cumulative incidence function is derived using an approximate likelihood, and a test statistic for comparing two samples is then obtained by extending Sun's test statistic. Small-sample properties of the proposed methods are examined through simulations, and a cohort dataset of AIDS patients is analyzed as a real example.

4.
In social interaction studies, one commonly encounters repeated displays of behaviors along with their duration data. Statistical methods for the analysis of such data use either parametric (e.g., Weibull) or semiparametric (e.g., Cox) proportional hazards models, modified to include random effects (frailty) that account for the correlation of repeated occurrences of behaviors within a unit (dyad). However, dyad-specific random effects by themselves cannot account for the ordering of event occurrences within dyads: the occurrence of an event (behavior) can make further occurrences of the same behavior more or less likely during an interaction. This article develops event-dependent random effects models for analyzing repeated behavior data using a Bayesian approach. The models are illustrated with a dataset on emotion regulation in families with children who have behavioral or emotional problems.

5.
In this article, we propose some families of estimators for the finite population variance of the post-stratified sample mean using information on two auxiliary variables. The families of estimators are discussed in their optimum cases. The MSEs of these estimators are derived to the first order of approximation, and the percent relative efficiency of the proposed families of estimators is demonstrated with numerical illustrations.

6.
Inequality-restricted hypothesis-testing methods, which include multivariate one-sided tests, are useful in practice, especially in multiple comparison problems. Multivariate and longitudinal data often contain missing values, since it may be difficult to observe all values for each variable. Although missing values are common in multivariate data, statistical methods for multivariate one-sided tests with missing values are quite limited. In this article, motivated by a dataset from a recent collaborative project, we develop two likelihood-based methods for multivariate one-sided tests with missing values, where the missing-data patterns can be arbitrary and the missing-data mechanisms may be non-ignorable. Although non-ignorable missingness is not testable from observed data, statistical methods addressing it can be used for sensitivity analysis and may lead to more reliable results, since ignoring informative missingness can bias the analysis. We analyze the real dataset in detail under various possible missing-data mechanisms and report interesting findings that were previously unavailable. We also derive some asymptotic results and evaluate our new tests using simulations.

7.
The Key to Achieving "Fast, Precise, and Accurate" Statistics Lies in Reforming the Statistical Management System
Since the Third Plenary Session of the 11th CPC Central Committee, statistical work in China has entered a new period of development. Reviewing the experience and lessons of more than a decade of statistical work, the author argues that the key to making statistical information services "fast, precise, and accurate" lies in reforming the statistical management system.

8.
Sample size determination is essential during the planning phase of clinical trials. To calculate the required sample size for paired right-censored data, the structure of the within-pair correlation needs to be pre-specified. In this article, we consider popular parametric copula models, including the Clayton, Gumbel, and Frank families, for the distribution of joint survival times. Under each copula model, we derive a sample size formula based on the testing framework for rank-based and non-rank-based tests (the logrank test and the Kaplan–Meier statistic, respectively). We also investigate how the power or the sample size is affected by the choice of testing method and copula model under different alternative hypotheses. In addition, we examine the impact of the pair correlation, accrual time, follow-up time, and loss-to-follow-up rate on sample size estimation. Finally, two real-world studies illustrate our method, and R code is available to the user.
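As a rough sketch of the copula machinery the abstract relies on (the margins, rate, and parameter values here are illustrative assumptions, not the paper's), paired survival times with Clayton-copula dependence can be generated by conditional inversion:

```python
# Hedged sketch: simulating paired survival times under a Clayton copula,
# a building block for copula-based sample-size calculations.
import numpy as np

rng = np.random.default_rng(42)

def clayton_pair(n, theta, rng):
    """Draw n pairs (u1, u2) from a Clayton copula via conditional inversion."""
    u1 = rng.uniform(size=n)
    w = rng.uniform(size=n)
    # Invert the conditional distribution C(u2 | u1) = w.
    u2 = ((w ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return u1, u2

u1, u2 = clayton_pair(10_000, theta=2.0, rng=rng)
# Map the correlated uniforms to exponential margins (rate 0.1, say).
t1, t2 = -np.log(u1) / 0.1, -np.log(u2) / 0.1
# Kendall's tau for the Clayton copula is theta / (theta + 2) = 0.5 here.
print(np.corrcoef(t1, t2)[0, 1])
```

Gumbel and Frank pairs can be generated the same way by swapping in their conditional inverses.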

9.
Panel datasets have been increasingly used in economics to analyze complex economic phenomena. Panel data form a two-dimensional array that combines cross-sectional and time series data. By constructing a panel data matrix, clustering can be applied to panel data analysis; this resolves the heterogeneity of the dependent variable before the main analysis. Clustering is a widely used statistical tool for finding subsets in a given dataset. In this article, we cluster a mixed panel dataset using agglomerative hierarchical algorithms based on Gower's distance and using k-prototypes. The performance of these algorithms is studied on panel data with mixed numerical and categorical features, and their effectiveness is compared using cluster accuracy. An experimental analysis of a real dataset is carried out using Stata and R.
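A minimal sketch of the Gower-distance-plus-agglomerative pipeline on toy mixed data (the four-row dataset and two-feature setup are illustrative assumptions, not the paper's panel data):

```python
# Hedged sketch: Gower's distance for mixed numeric/categorical features,
# fed into agglomerative (average-linkage) clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

num = np.array([[1.0], [1.2], [8.0], [8.3]])   # one numeric feature
cat = np.array(["a", "a", "b", "b"])           # one categorical feature

feat_range = num.max() - num.min()
n = len(num)
d = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d_num = abs(num[i, 0] - num[j, 0]) / feat_range  # range-scaled numeric part
        d_cat = float(cat[i] != cat[j])                  # simple matching for categories
        d[i, j] = (d_num + d_cat) / 2                    # average over the two features

# scipy expects a condensed (upper-triangle) distance vector.
condensed = d[np.triu_indices(n, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(labels)
```

The k-prototypes alternative mentioned in the abstract replaces this distance matrix with a combined cost of squared numeric distance and categorical mismatches inside a k-means-style loop.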

10.
Given the very large amount of data obtained every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, the relevant data are often scattered across different files obtained through different sampling designs. Data fusion is a set of methods for combining information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining logistic regression with an Expectation-Maximization algorithm. Results show that, despite the lack of data, this procedure can perform better than standard matching procedures.

11.
In this article, the valuation of power options is investigated when the stock price is governed by a generalized jump-diffusion Markov-modulated model. The systematic risk is characterized by the diffusion part and the nonsystematic risk by the pure jump process, where the jumps are described by a generalized renewal process with generalized jump amplitudes. By introducing a NASDAQ index model, the two risk premia are identified. A risk-neutral measure is obtained by employing the Esscher transform with two families of parameters, which represent the two risk premia. With the nonsystematic risk premium taken into account, the price of the power option is studied under the generalized jump-diffusion Markov-modulated model. In the special case of a renewal process with log double-exponential jump amplitudes, exact expressions for the Esscher parameters and the pricing formula are provided. Numerical simulation depicts the influence of the nonsystematic risk premium and the power index on the option price.

12.
We introduce two new general families of continuous distributions, generated by a distribution F and two positive real parameters α and β that control the skewness and tail weight of the distribution. The construction is motivated by the distribution of k-record statistics and can be derived by applying the inverse probability integral transformation to the log-gamma distribution. The introduced families are suitable for modelling significantly skewed and heavy-tailed data. Various properties of the families are studied, and a number of estimation results and fits to real data are given to illustrate the results.

13.
A general method is presented for constructing a location estimator that is asymptotically efficient at any two different location-scale families of symmetric distributions, as well as at an appropriately defined class of distributions lying in between. The method works by embedding the two families in a comprehensive parametric model and identifying the estimator with the MLE. The case where the families are normal and double-exponential is examined in detail.
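One familiar "comprehensive" parametric model that embeds the normal (shape 2) and double-exponential (shape 1) families is the generalized normal (exponential power) family; whether this matches the paper's embedding is an assumption. A sketch of the joint MLE within that family, whose location component plays the role of the adaptive estimator:

```python
# Hedged sketch: fit the generalized normal family by maximum likelihood;
# the estimated shape tells us where between Laplace (1) and normal (2)
# the data sit, and the fitted loc is the corresponding location estimate.
import numpy as np
from scipy.stats import gennorm, laplace

rng = np.random.default_rng(0)
data = laplace.rvs(loc=5.0, scale=1.0, size=2000, random_state=rng)

beta_hat, loc_hat, scale_hat = gennorm.fit(data)
print(beta_hat, loc_hat)  # shape near 1 suggests double-exponential-like tails
```

On Laplace data the fitted shape sits near 1, so the location MLE behaves like the sample median; on normal data it would approach the sample mean.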

14.
In this article, we model the relationship between two circular variables using the circular regression model proposed by Jammalamadaka and Sarma (1993), here called the JS circular regression model. The model has many interesting properties and is sensitive enough to detect the occurrence of outliers, and we focus on the problem of identifying outliers in it. In particular, we extend the COVRATIO statistic, which has been used successfully for this purpose in the linear case, to the JS circular regression model via a row-deletion approach. Through simulation studies, cut-off points for the new procedure are obtained and its performance is investigated. The performance improves when the resulting residuals have small variance and when the sample size gets larger. An application of the procedure to a real dataset is presented.
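The classical linear-model COVRATIO that the abstract extends can be sketched as follows (simulated data with one planted outlier; this is the textbook linear version, not the circular extension):

```python
# Hedged sketch: COVRATIO via row deletion in ordinary least squares.
# Values far from 1 flag points that strongly change the precision of the fit.
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)
y[-1] += 8.0                       # plant one outlier

X = np.column_stack([np.ones(n), x])

def scaled_cov(Xm, ym):
    """Estimated covariance of the OLS coefficients: s^2 (X'X)^{-1}."""
    beta, *_ = np.linalg.lstsq(Xm, ym, rcond=None)
    resid = ym - Xm @ beta
    s2 = resid @ resid / (len(ym) - Xm.shape[1])
    return s2 * np.linalg.inv(Xm.T @ Xm)

def covratio(X, y, i):
    keep = np.arange(len(y)) != i
    return np.linalg.det(scaled_cov(X[keep], y[keep])) / np.linalg.det(scaled_cov(X, y))

ratios = np.array([covratio(X, y, i) for i in range(n)])
print(ratios[-1])  # the planted outlier gives a COVRATIO well below 1
```

The paper's contribution is to replace the OLS fit and residual variance here with their JS circular regression counterparts.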

15.
In this article, we propose a new class of distributions defined by a quantile function, which nests several distributions as its members. The quantile function proposed here is the sum of the quantile functions of the generalized Pareto and Weibull distributions. Various distributional properties and reliability characteristics of the class are discussed, and estimation of the model parameters using L-moments is studied. Finally, we apply the model to a real-life dataset.
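A minimal sketch of the construction (the parameter values are illustrative assumptions): the sum of the generalized Pareto and Weibull quantile functions is itself a valid quantile function, which immediately supports inverse-transform sampling from the class:

```python
# Hedged sketch: a quantile function built as the sum of GPD and Weibull
# quantile functions, and sampling from it by the inverse transform.
import numpy as np

def q_gpd(u, sigma=1.0, xi=0.2):
    """Generalized Pareto quantile function (location 0)."""
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

def q_weibull(u, lam=1.0, k=1.5):
    """Weibull quantile function."""
    return lam * (-np.log(1.0 - u)) ** (1.0 / k)

def q_new(u):
    # A sum of nondecreasing quantile functions is nondecreasing on (0, 1),
    # so it defines a distribution in its own right.
    return q_gpd(u) + q_weibull(u)

rng = np.random.default_rng(7)
u = rng.uniform(size=100_000)
sample = q_new(u)                 # inverse-transform sample from the class
print(sample.mean())
```

Setting one summand's scale to zero recovers the other component, which is how the class nests its member distributions.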

16.
A Comparison of Frailty and Other Models for Bivariate Survival Data
Multivariate survival data arise when each study subject may experience multiple events or when study subjects are clustered into groups. Statistical analyses of such data need to account for the intra-cluster dependence through appropriate modeling. Frailty models are the most popular for such failure time data. However, there are other approaches which model the dependence structure directly. In this article, we compare the frailty models for bivariate data with the models based on bivariate exponential and Weibull distributions. Bayesian methods provide a convenient paradigm for comparing the two sets of models we consider. Our techniques are illustrated using two examples. One simulated example demonstrates model choice methods developed in this paper; the other example, based on a practical dataset of onset of blindness among patients with diabetic retinopathy, considers Bayesian inference using different models.

17.
The Behrens–Fisher problem concerns inference for the difference between the means of two independent normal populations without the assumption of equal variances. In this article, we compare three approximate confidence intervals and a generalized confidence interval for the Behrens–Fisher problem. We also show how to obtain simultaneous confidence intervals for the three-population case (analysis of variance, ANOVA) using the Bonferroni correction factor. We conduct an extensive simulation study to evaluate these methods with respect to their type I error rate, power, expected confidence interval width, and coverage probability. Finally, the considered methods are applied to two real datasets.
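One of the classical approximate solutions in this setting, Welch's interval with Satterthwaite degrees of freedom, can be sketched on simulated data (the sample sizes and parameter values are illustrative assumptions, not the paper's):

```python
# Hedged sketch: Welch's approximate CI for the Behrens-Fisher problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(10.0, 2.0, size=40)   # population 1: larger variance
y = rng.normal(8.0, 0.5, size=25)    # population 2: smaller variance

nx, ny = len(x), len(y)
vx, vy = x.var(ddof=1), y.var(ddof=1)
se = np.sqrt(vx / nx + vy / ny)

# Welch-Satterthwaite approximate degrees of freedom.
df = (vx / nx + vy / ny) ** 2 / (
    (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1)
)

diff = x.mean() - y.mean()
tcrit = stats.t.ppf(0.975, df)
ci = (diff - tcrit * se, diff + tcrit * se)
print(f"95% CI for mu_x - mu_y: ({ci[0]:.2f}, {ci[1]:.2f})")
```

The Bonferroni extension for three populations simply replaces the 0.975 quantile with 1 - 0.05/(2 * 3) for each pairwise interval.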

18.
In this article, we propose a denoising methodology in the wavelet domain based on a Bayesian hierarchical model with a double Weibull prior. We propose two estimators, one based on the posterior mean (Double Weibull Wavelet Shrinker, DWWS) and the other based on the larger posterior mode (DWWS-LPM), and show how to calculate them efficiently. Traditionally, mixture priors have been used for modeling sparse wavelet coefficients; the interesting feature of this article is the use of a non-mixture prior. We show that the methodology provides good denoising performance, comparable even to state-of-the-art methods that use mixture priors and empirical Bayes settings of hyperparameters, as demonstrated by extensive simulations on standard test functions. An application to a real-world dataset is also considered.

19.
Polytomous Item Response Theory (IRT) models are used by specialists to score assessments and questionnaires whose items have multiple response categories. In this article, we study the performance of five model-comparison criteria for comparing the fit of the graded response and generalized partial credit models on the same dataset when the choice between the two is unclear. A simulation study is conducted to analyze the sensitivity of the priors and to compare the performance of the criteria using the No-U-Turn Sampler algorithm under a Bayesian approach. The results are used to select a model for an application to mental health data.

20.
Many techniques based on data drawn by a Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect, so it is essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale-free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
