Found 20 similar articles; search took 62 ms
1.
Housila P. Singh, Communications in Statistics - Theory and Methods, 2017, 46(2): 521-531
This paper provides an efficient new unbiased estimator of the proportion of a potentially sensitive attribute in survey sampling. The suggested randomization device makes use of the means and variances of the scrambling variables, and of two scalars lying between zero and one; thus, the same amount of information is used at the estimation stage. The variance formula of the suggested estimator is obtained. We compare the proposed unbiased estimator with the estimators of Kuk (1990), Franklin (1989), and Singh and Chen (2009), and derive the conditions under which the proposed estimator is more efficient than each of them. The optimum estimator (OE) in the proposed class of estimators is identified; it depends only on moment ratios of the scrambling variables. Its variance is obtained and compared with those of the Kuk (1990), Franklin (1989), and Singh and Chen (2009) estimators. It is interesting to note that the optimum estimator in the class of Singh and Chen (2009) depends on the parameter π under investigation, which limits its use in practice, whereas the proposed OE is free from this constraint and depends only on the moment ratios of the scrambling variables. This is an advantage over the Singh and Chen (2009) estimator. Numerical illustrations are given in support of the present study for the case where the scrambling variables follow a normal distribution. Theoretical and empirical results are sound and quite illuminating in favor of the present study.
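For context, the baseline that the scrambled-response literature builds on is Warner's (1965) randomized response model. The sketch below implements that classic unbiased estimator of a sensitive proportion, not the randomization device proposed in this paper:

```python
def warner_estimate(lam_hat, p, n):
    """Warner (1965) randomized response: each respondent answers about the
    sensitive attribute A with probability p and about its complement with
    probability 1 - p, so privacy is preserved.  lam_hat is the observed
    proportion of 'yes' answers among n respondents.  For p != 1/2,
        pi_hat = (lam_hat - (1 - p)) / (2p - 1)
    is unbiased for pi = P(A)."""
    if p == 0.5:
        raise ValueError("p = 1/2 makes pi unidentifiable")
    pi_hat = (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)
    # Estimated variance: sampling variance plus the privacy cost
    # induced by the randomization device (Warner, 1965).
    var = pi_hat * (1 - pi_hat) / n + p * (1 - p) / (n * (2 * p - 1) ** 2)
    return pi_hat, var
```

The second variance term shows the efficiency price of the device: it blows up as p approaches 1/2, which is exactly the trade-off that scrambling-variable designs such as the one above try to improve on.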
2.
P. Druilhet, Communications in Statistics - Theory and Methods, 2017, 46(24): 12281-12289
We revisit the Flatland paradox proposed by Stone (1976), an example of non-conglomerability. The main novelty in our analysis of the paradox is to consider marginal versus conditional models rather than proper versus improper priors. We show that in the first model a prior distribution should be considered as a probability measure, whereas in the second a prior distribution should be considered in the projective space of measures. This induces two different kinds of limiting arguments that are useful for understanding the paradox. We also show that the choice of a flat prior is not adapted to the structure of the parameter space, and we consider an improper prior, based on reference priors with nuisance parameters, for which the Bayesian analysis matches the intuitive reasoning.
3.
Xu Zhang, Communications in Statistics - Simulation and Computation, 2015, 44(2): 489-504
Efron and Petrosian (1999) formulated the problem of double truncation and proposed nonparametric methods for testing and estimation. An alternative estimation method, using the inverse-probability-weighting (IPW) technique, was proposed by Shen (2010a). One aim of this paper is to assess the computational complexity of the existing estimation methods: through a simulation study, we found that the two methods have the same level of computational efficiency. The other aim is to study the noniterative IPW estimator under the condition that the truncation variables are independent; the IPW estimator and the corresponding interval estimation proved satisfactory in the simulation study.
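As a generic illustration of the inverse-probability-weighting idea (a hedged sketch, not Shen's (2010a) exact estimator): under double truncation an observation is seen only with probability P(L ≤ X ≤ R), so weighting each point by the inverse of an estimate of its selection probability recovers the target distribution:

```python
import numpy as np

def ipw_cdf(x, selection_prob):
    """Inverse-probability-weighted empirical CDF for selection-biased data.
    x: observed (e.g., doubly truncated) values;
    selection_prob: estimated inclusion probability P(L <= x_i <= R) for
    each observation (taken as given in this sketch).
    Returns the sorted support points and the weighted CDF there."""
    x = np.asarray(x, dtype=float)
    w = 1.0 / np.asarray(selection_prob, dtype=float)  # IPW weights
    order = np.argsort(x)
    cdf = np.cumsum(w[order]) / w.sum()
    return x[order], cdf
```

When all inclusion probabilities are equal the weights cancel and the ordinary empirical CDF is recovered; unequal probabilities up-weight observations that were unlikely to be sampled.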
4.
Ted Speevak, Communications in Statistics - Theory and Methods, 2017, 46(17): 8419-8429
An inequality for the sum of squares of rank differences associated with Spearman’s rank correlation coefficient, when ties and missing data are present in both rankings, was established numerically in Loukas and Papaioannou (1991). That inequality is improved and generalized.
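For reference, in the no-ties, complete-data case the quantity underlying this inequality is the familiar sum of squared rank differences, with Spearman's coefficient given by ρ = 1 − 6Σd² / (n(n² − 1)) and Σd² bounded above by n(n² − 1)/3. A minimal sketch of that baseline case (not the tied/missing-data generalization studied in the paper):

```python
def ranks(v):
    """1-based ranks of v, assuming no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman_no_ties(x, y):
    """Spearman's rho via squared rank differences (no ties):
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))
```

The upper bound on Σd² is attained by exactly reversed rankings, giving ρ = −1; identical rankings give Σd² = 0 and ρ = 1.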
5.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006) and the intervals we derive using the Jeffreys’ and probability matching priors. The intervals obtained from the Jeffreys’ prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the test, using Bayesian methods, are compared to the tests used by Krishnamoorthy and Thomson (2004). The Jeffreys’, probability matching, and two other priors are used.
6.
This article proposes new symmetric and asymmetric distributions, applying methods analogous to those in Kim (2005) and Arnold et al. (2009) to the exponentiated normal distribution studied in Durrans (1992), which we call the power-normal (PN) distribution. The proposed bimodal extension, the main focus of the paper, is called the bimodal power-normal model and is denoted BPN(α), where α is the asymmetry parameter. We derive some of its properties, including moments, and discuss maximum likelihood estimation. Two important features of the proposed model are that its normalizing constant has a simple closed form and that its Fisher information matrix is nonsingular, guaranteeing large-sample properties of the maximum likelihood estimators. Finally, simulation studies and real-data applications show that the proposed model performs well in both settings.
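The exponentiated (power) normal density referred to here has the standard form f(x) = α φ(x) Φ(x)^{α−1} for α > 0, with α = 1 recovering the standard normal. A quick sketch of that base density (the bimodal BPN(α) extension itself is not reproduced here):

```python
import math

def power_normal_pdf(x, alpha):
    """Density of the power-normal PN(alpha) of Durrans (1992):
    f(x) = alpha * phi(x) * Phi(x)**(alpha - 1), alpha > 0,
    where phi and Phi are the standard normal pdf and cdf.  The closed-form
    normalizing constant (just alpha) is one of the features noted above."""
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return alpha * phi * Phi ** (alpha - 1)
```

Because F(x) = Φ(x)^α is an explicit CDF, the density integrates to one for every α, which a crude numerical check confirms.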
7.
The two-period crossover design is one of the most commonly used designs in clinical trials, but estimation of the treatment effect is complicated by the possible presence of a carryover effect, and it is known that ignoring a carryover effect when it exists can lead to poor estimates of the treatment effect. The classical approach of Grizzle (1965) consists of two stages: first, a preliminary test is conducted for a carryover effect; if the carryover effect is significant, the analysis is based only on the data from period one, otherwise it is based on the data from both periods. A Bayesian approach with improper priors was proposed by Grieve (1985), using a mixture of two models: one with a carryover effect and one without. The indeterminacy of the Bayes factor due to the arbitrary constant in the improper prior was addressed by assigning a minimally discriminatory value to the constant. In this article, we present an objective Bayesian estimation approach to the two-period crossover design that is also based on a mixture model, but uses the commonly recommended Zellner–Siow g-prior. We provide simulation studies and a real-data example and compare the numerical results with the approaches of Grizzle (1965) and Grieve (1985).
8.
Rameela Chandrasekhar, Communications in Statistics - Theory and Methods, 2014, 43(14): 2951-2957
Adaptive designs find an important application in the estimation of unknown percentiles of an underlying dose-response curve. A nonparametric adaptive design was suggested by Mugno et al. (2004) to simultaneously estimate multiple percentiles of an unknown dose-response curve via generalized Polya urns. In this article, we examine the properties of the design proposed by Mugno et al. (2004) when delays in observing responses are encountered. Using simulations, we evaluate a modification of the design under varying group sizes. Our results demonstrate unbiased estimation with minimal loss of efficiency compared to the original compound urn design.
9.
Thomas Parker, Communications in Statistics - Theory and Methods, 2017, 46(11): 5195-5202
In this note, it is shown that the finite-sample distributions of the Wald, likelihood ratio, and Lagrange multiplier statistics in the classical linear regression model are members of the generalized beta model introduced by McDonald and Xu (1995a). This is useful for examining the properties of these test statistics. For example, this characterization makes it easy to find distribution, quantile, and density functions for each test statistic, makes it clear why Wald tests may overreject the null hypothesis using asymptotic critical values, and formalizes the fact that the Lagrange multiplier statistic follows a distribution with bounded support.
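The overrejection of the Wald test and the bounded support of the LM statistic can both be seen from the textbook sums-of-squares forms of the three statistics in the classical linear model. A standard illustration (not the generalized-beta characterization itself):

```python
import math

def wald_lr_lm(ssr_restricted, ssr_unrestricted, n):
    """Classical trinity of tests in the linear regression model, written in
    terms of restricted/unrestricted sums of squared residuals:
        W  = n (SSR_r - SSR_u) / SSR_u
        LR = n log(SSR_r / SSR_u)
        LM = n (SSR_r - SSR_u) / SSR_r
    Since a - 1 >= log(a) >= 1 - 1/a for a = SSR_r/SSR_u >= 1, the ordering
    W >= LR >= LM always holds, so Wald tests reject at least as often as the
    others against the same asymptotic critical value.  Also LM = n(1 - 1/a)
    is bounded above by n: its distribution has bounded support."""
    w = n * (ssr_restricted - ssr_unrestricted) / ssr_unrestricted
    lr = n * math.log(ssr_restricted / ssr_unrestricted)
    lm = n * (ssr_restricted - ssr_unrestricted) / ssr_restricted
    return w, lr, lm
```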
10.
Pao-Sheng Shen, Communications in Statistics - Theory and Methods, 2017, 46(4): 1916-1926
A complication in analyzing tumor data is that the tumors detected in a screening program tend to be slowly progressive tumors; this is the left-truncated sampling that is inherent in screening studies. Under the assumption that all subjects have the same tumor growth function, Ghosh (2008) developed estimation procedures for the Cox proportional hazards model. Shen (2011a) demonstrated that Ghosh (2008)'s approach can be extended to the case where each subject has a subject-specific growth function. In this article, we present a general framework for the analysis of data from cancer screening studies under the linear transformation model, which includes Cox's model as a special case, and develop the corresponding estimation procedures. A simulation study demonstrates the potential usefulness of the proposed estimators.
11.
Under a second moment condition, we obtain Berry–Esseen bounds for random-index nonlinear statistics using a technique discussed in Chen and Shao (2007). The key idea in this article is to approximate any random-index nonlinear statistic by a random-index linear statistic. Bounds for random sums of independent random variables are also provided. As applications, we give bounds for random U-statistics and for random sums of present values in investment analysis.
12.
Sample size estimation for comparing the rates of change in two-arm repeated measurements has been investigated by many researchers. In contrast, the literature has paid relatively little attention to sample size estimation for studies with multi-arm repeated measurements, where the design and data analysis can be more complex than in two-arm trials. For continuous outcomes, Jung and Ahn (2004) and Zhang and Ahn (2013) have presented sample size formulas to compare the rates of change and time-averaged responses in multi-arm trials, using the generalized estimating equation (GEE) approach. To our knowledge, there has been no corresponding development for multi-arm trials with count outcomes. We present a sample size formula for comparing the rates of change in multi-arm repeated count outcomes using the GEE approach that accommodates various correlation structures, missing data patterns, and unbalanced designs. We conduct simulation studies to assess the performance of the proposed sample size formula under a wide range of design configurations. Simulation results suggest that empirical type I error and power are maintained close to their nominal levels. The proposed method is illustrated using an epileptic clinical trial example.
13.
In this paper, the focus is on sequential analysis of multivariate financial time series with heavy tails. The mean vector and the covariance matrix of multivariate nonlinear models are simultaneously monitored by modifying conventional control charts to identify structural changes in the data. The considered target process is a constant conditional correlation model (cf. Bollerslev, 1990), an extended constant conditional correlation model (cf. He and Teräsvirta, 2004), a dynamic conditional correlation model (cf. Engle, 2002), or a generalized dynamic conditional correlation model (cf. Cappiello et al., 2006). For statistical surveillance we use control charts based on residuals; the procedures are constructed for the t-distribution. The detection speed of these charts is compared via Monte Carlo simulation. In the empirical study, the procedure with the best performance is applied to log-returns of the stock market indices FTSE and CAC.
14.
This paper treats the problem of stochastic comparisons for the extreme order statistics arising from heterogeneous beta distributions. Some sufficient conditions involving majorization-type partial orders are provided for comparing the extreme order statistics in the sense of various magnitude orderings, including the likelihood ratio order, the reversed hazard rate order, the usual stochastic order, and the usual multivariate stochastic order. The results established here strengthen and extend those of Kochar and Xu (2007), Mao and Hu (2010), Balakrishnan et al. (2014), and Torrado (2015). A real application in system assembly and some numerical examples are also presented to illustrate the theoretical results.
15.
Gabriel Rodríguez, Communications in Statistics - Simulation and Computation, 2016, 45(1): 207-221
In recent articles, Fajardo et al. (2009) and Reisen and Fajardo (2012) propose an alternative semiparametric estimator of the fractional parameter in ARFIMA models that is robust to the presence of additive outliers. The results are very interesting; however, they use samples of 300 or 800 observations, which are rarely available in macroeconomics. In order to perform a comparison, I estimate the fractional parameter using the procedure of Geweke and Porter-Hudak (1983) augmented with dummy variables associated with the (previously) detected outliers, using the statistic τd suggested by Perron and Rodríguez (2003). Compared with Fajardo et al. (2009) and Reisen and Fajardo (2012), I find better results for the mean and bias of the fractional parameter when T = 100, and very similar results in terms of the standard deviation and the MSE. However, for larger sample sizes such as 300 or 800, the robust procedure performs better. Empirical applications to seven monthly Latin American inflation series with very small sample sizes contaminated by additive outliers are discussed.
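The Geweke and Porter-Hudak (1983) estimator regresses the log-periodogram at the first m Fourier frequencies on a deterministic function of frequency; the OLS slope estimates the fractional parameter d. A minimal sketch of the plain GPH regression (without the outlier dummies used in the article):

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke & Porter-Hudak (1983) log-periodogram estimate of the
    fractional-differencing parameter d.  Regresses log I(lambda_j) on
    -2*log(2*sin(lambda_j/2)) for j = 1..m; the slope estimates d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))  # a common bandwidth choice, m = n**0.5
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    # periodogram at the first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
    y = np.log(I)
    reg = -2.0 * np.log(2.0 * np.sin(freqs / 2.0))
    reg_c = reg - reg.mean()
    return float(reg_c @ (y - y.mean()) / (reg_c @ reg_c))
```

For white noise the true d is 0, so the estimate should be close to zero in moderately large samples; additive outliers bias this plain version, which motivates both the robust estimator and the dummy-variable augmentation discussed above.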
16.
Amir T. Payandeh Najafabadi, Fatemeh Atatalab, Maryam Omidi Najafabadi, Communications in Statistics - Theory and Methods, 2017, 46(1): 415-426
Credibility formulas have been developed in many areas of actuarial science. Building on Payandeh (2010), this article extends the concept of the credibility formula to the relative premium of a given rate-making system. More precisely, it calculates Payandeh's (2010) credibility factor for zero-inflated Poisson–gamma distributions with respect to several loss functions. A comparison study is given.
17.
Techniques used in variability assessment are used to draw conclusions about the “spread”/uniformity of data curves. Due to their limitations, these techniques are not adequate when the data manifest multiple peaks. Examples of such manifestations (in three-dimensional space) include under-foot pressure distributions recorded for different types of footwear (Becerro-de-Bengoa-Vallejo et al., 2014; Cibulka et al., 1994; Davies et al., 2003), surface textures and interfaces designed to affect friction, and molecular surface structures such as viral epitopes (Torras and Garcia-Valls, 2004; Pacejka, 1997; Gustafsson, 1997). This article proposes a technique for generating a single variable, Λ, that quantifies the uniformity of such surfaces. We define and validate this technique using several mathematical and graphical models.
18.
Daniel Melser, Journal of Business & Economic Statistics, 2018, 36(3): 516-522
Scanner data are increasingly being used in the calculation of price indexes such as the CPI. The preeminent approach is the RYGEKS method (Ivancic, Diewert, and Fox 2011), which uses multilateral methods to construct price parities across a rolling year and then links these to construct a nonrevisable index. While this approach performs well, some issues remain unresolved, in particular the optimal window length and the linking method. In this note, these questions are addressed: a novel linking method is proposed, along with the use of weighted GEKS as opposed to a fixed window. These approaches are illustrated empirically on a large scanner dataset and perform well.
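The GEKS building block underlying RYGEKS is simple: the multilateral index between periods s and t is the geometric mean, over every link period l in the window, of the chained bilateral indexes s→l→t. A minimal sketch, with the matrix of bilateral indexes (e.g., Fisher indexes computed from the scanner data) taken as given:

```python
import math

def geks(bilateral, s, t):
    """GEKS multilateral price index between periods s and t.
    bilateral[j][k] is the bilateral index (price change) from period j to
    period k, with bilateral[j][j] == 1.  GEKS takes the geometric mean of
    the chained parities s -> l -> t over every link period l, which makes
    the resulting parities transitive."""
    T = len(bilateral)
    log_sum = sum(math.log(bilateral[s][l] * bilateral[l][t]) for l in range(T))
    return math.exp(log_sum / T)
```

When the bilateral indexes are already transitive, GEKS reproduces them exactly; otherwise it delivers the transitive set of parities closest to the bilaterals, which is why it can be computed over a rolling window and then linked, as in RYGEKS.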
19.
Since the seminal paper of Ghirardato (1997), it has been known that a Fubini theorem for non-additive measures is available only for “slice-comonotonic” functions in the framework of product algebras. Later, inspired by Ghirardato (1997), Chateauneuf and Lefort (2008) obtained some Fubini theorems for non-additive measures in the framework of product σ-algebras. In this article, we study the Fubini theorem for non-additive measures in the framework of g-expectation, and we give several different sets of assumptions under which a Fubini theorem holds in this framework.