1.
This paper addresses the problem of unbiased estimation of P[X > Y] = θ for two independent exponentially distributed random variables X and Y. We present the (unique) unbiased estimator of θ based on a single pair of order statistics obtained from two independent random samples from the two populations. We also indicate how this estimator can be utilized to obtain unbiased estimators of θ when only a few selected order statistics are available from the two random samples, as well as when the samples are selected by an alternative procedure known as ranked set sampling. It is proved that for ranked set samples of size two, the proposed estimator is uniformly better than the conventional non-parametric unbiased estimator; furthermore, a modified ranked set sampling procedure provides an unbiased estimator that is better still.
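For orientation, the conventional non-parametric unbiased estimator referred to above is the Mann–Whitney proportion of pairs with X_i > Y_j; a minimal sketch (the paper's order-statistic-based estimator is not reproduced here):

```python
import numpy as np

def theta_hat_mann_whitney(x, y):
    """Conventional non-parametric unbiased estimator of P(X > Y):
    the proportion of pairs (X_i, Y_j) with X_i > Y_j."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean(x[:, None] > y[None, :]))
```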
2.
When there are several replicates available at each level combination of two factors, nonadditivity can be tested by the usual two-way ANOVA method. However, the ANOVA method cannot be used when the experiment is unreplicated (one observation per cell of the two-way classification). Several tests have been developed to address nonadditivity in unreplicated experiments, starting with Tukey's (1949) one-degree-of-freedom test for nonadditivity. Most of them assume that the interaction term has a multiplicative form, but such tests have low power if the assumed functional form is inappropriate. This has led to tests that do not assume a specific form for the interaction term. This paper proposes a new test for interaction that likewise assumes no specific form, and it has the advantage over earlier tests that it can also be used for incomplete two-way tables. A simulation study is performed to evaluate the power of the proposed test and compare it with other well-known tests.
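The classical starting point is concrete enough to sketch; a minimal implementation of Tukey's one-degree-of-freedom test (the paper's own test, which also handles incomplete tables, is not reproduced here):

```python
import numpy as np
from scipy import stats

def tukey_one_df_test(y):
    """Tukey's (1949) one-degree-of-freedom test for nonadditivity in an
    unreplicated two-way table y (rows = factor A, columns = factor B)."""
    r, c = y.shape
    m = y.mean()
    a = y.mean(axis=1) - m                       # row effects
    b = y.mean(axis=0) - m                       # column effects
    num = (a[:, None] * b[None, :] * y).sum() ** 2
    ss_nonadd = num / ((a**2).sum() * (b**2).sum())
    resid = y - m - a[:, None] - b[None, :]
    ss_resid = (resid**2).sum()                  # interaction SS, (r-1)(c-1) df
    df2 = (r - 1) * (c - 1) - 1
    f_stat = ss_nonadd / ((ss_resid - ss_nonadd) / df2)
    return f_stat, stats.f.sf(f_stat, 1, df2)
```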
3.
Mauricio Sadinle, Communications in Statistics - Simulation and Computation, 2013, 42(9): 1909-1924
Logit confidence intervals for the odds ratio are known to perform well with small samples unless the actual odds ratio is very large. In single capture–recapture estimation the odds ratio equals 1 because the samples are assumed independent. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed in order to estimate the size of a closed population under single capture–recapture estimation. It is found that the transformed logit interval, after adding 0.5 to each observed count before computation, has actual coverage probabilities close to the nominal level even for small populations and even for capture probabilities near 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Given that the 0.5-adjusted transformed logit interval is very simple to compute and performs well, it is well suited for routine use with the single capture–recapture method.
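The underlying 0.5-adjusted logit (Woolf–Gart) interval for a 2 × 2 odds ratio is standard; a minimal sketch (the paper's transformation of this interval into an interval for the population size is not reproduced here):

```python
import numpy as np
from scipy import stats

def adjusted_logit_or_ci(a, b, c, d, level=0.95):
    """Logit (Woolf-type) CI for the odds ratio of the 2x2 table
    [[a, b], [c, d]], with 0.5 added to each count before computation."""
    a, b, c, d = (v + 0.5 for v in (a, b, c, d))
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(0.5 + level / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)
```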
4.
This paper extends the one-way heteroskedasticity score test of Holly and Gardiol (2000, In: Krishnakumar, J., Ronchetti, E. (Eds.), Panel Data Econometrics: Future Directions, North-Holland, Amsterdam, pp. 199–211) to two conditional Lagrange Multiplier (LM) tests of heteroskedasticity under contiguous alternatives within the two-way error components model framework. In each case, Rao's efficient score statistic for testing heteroskedasticity is first derived. Then, based on a specific set of assumptions, the asymptotic distribution of the score under contiguous alternatives is established. Finally, the expressions for the score test statistics in the presence of heteroskedasticity and their asymptotic local powers are derived and discussed.
5.
Lee-Shen Chen, Communications in Statistics - Theory and Methods, 2013, 42(10): 1635-1648
This article considers Bayesian p-values for testing independence in 2 × 2 contingency tables with cell counts observed under the two independent binomial sampling scheme and the multinomial sampling scheme. From the frequentist perspective, Fisher's p-value (p_F) is the most commonly used, but it can be conservative for small to moderate sample sizes. From the Bayesian perspective, Bayarri and Berger (2000) proposed the partial posterior predictive p-value (p_PPOST), which avoids the double use of the data that occurs in the posterior predictive p-value (p_POST) proposed earlier by Guttman (1967) and Rubin (1984). The subjective and objective Bayesian p-values in terms of p_POST and p_PPOST are derived under the beta prior and the (noninformative) Jeffreys prior, respectively. Numerical comparisons among p_F, p_POST, and p_PPOST reveal that p_PPOST performs much better than p_F and p_POST for small to moderate sample sizes from the frequentist perspective.
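The posterior predictive p-value p_POST under the independent binomial scheme is easy to simulate; a sketch under a Jeffreys prior on the common success probability, using the Pearson chi-square as discrepancy (the discrepancy choice is an illustrative assumption; p_PPOST additionally requires removing the information carried by the observed statistic from the posterior and is not sketched here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def chi2_stat(x1, n1, x2, n2):
    """Pearson chi-square discrepancy for H0: p1 = p2."""
    p = (x1 + x2) / (n1 + n2)
    if p == 0 or p == 1:                    # degenerate replicate: no evidence
        return 0.0
    o = np.array([x1, n1 - x1, x2, n2 - x2], dtype=float)
    e = np.array([n1 * p, n1 * (1 - p), n2 * p, n2 * (1 - p)])
    return float(((o - e) ** 2 / e).sum())

def p_post(x1, n1, x2, n2, draws=10_000):
    """Posterior predictive p-value for H0: p1 = p2 under the independent
    binomial scheme, with a Jeffreys Beta(1/2, 1/2) prior on the common p."""
    t_obs = chi2_stat(x1, n1, x2, n2)
    p = rng.beta(0.5 + x1 + x2, 0.5 + (n1 - x1) + (n2 - x2), size=draws)
    r1, r2 = rng.binomial(n1, p), rng.binomial(n2, p)
    t_rep = np.array([chi2_stat(a, n1, b, n2) for a, b in zip(r1, r2)])
    return float(np.mean(t_rep >= t_obs))
```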
6.
Michael Kohler, AStA Advances in Statistical Analysis, 2008, 92(2): 153-178
American options in discrete time can be priced by solving optimal stopping problems. This can be done by computing so-called continuation values, which we represent as regression functions defined recursively using the continuation values of the next time step. We use Monte Carlo simulation to generate data, and then apply smoothing spline regression estimates to estimate the continuation values from these data. All parameters of the estimate are chosen data-dependently. We present results concerning consistency and the estimates' rate of convergence.
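The recursive regression representation lends itself to a compact Monte Carlo sketch; here ordinary polynomial least squares stands in for the smoothing spline estimates analysed in the paper, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def american_put_lsm(s0=36.0, k=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=50, paths=100_000, degree=3):
    """Regression-based price of an American put: continuation values are
    estimated backwards in time by regressing discounted future payoffs on
    the current stock price (polynomial least squares in place of the
    paper's smoothing splines)."""
    dt = T / steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((paths, steps))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(k - s[:, -1], 0.0)           # exercise at maturity
    for t in range(steps - 2, -1, -1):
        payoff *= disc                               # discount one step back
        itm = k > s[:, t]                            # regress on ITM paths only
        coef = np.polyfit(s[itm, t], payoff[itm], degree)
        cont = np.polyval(coef, s[itm, t])           # estimated continuation value
        exercise = k - s[itm, t]
        payoff[itm] = np.where(exercise > cont, exercise, payoff[itm])
    return disc * payoff.mean()
```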
7.
Guo-Liang Fan, Han-Ying Liang, Jiang-Feng Wang, Hong-Xia Xu, AStA Advances in Statistical Analysis, 2010, 94(1): 89-103
In this paper, we establish the strong consistency and asymptotic normality of the least squares (LS) estimators in simple linear errors-in-variables (EV) regression models when the errors form a stationary α-mixing sequence of random variables. Consistency in quadratic mean is also considered.
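For orientation, the simple linear EV model in this literature is typically written as follows (a standard formulation with assumed notation; the paper's exact moment and mixing conditions are stated therein):

```latex
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad
X_i = x_i + \delta_i, \qquad i = 1, \dots, n,
```

where only the pairs (X_i, y_i) are observed and the error sequences are stationary and α-mixing, so the LS estimators must be computed from the error-contaminated surrogates X_i.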
8.
Lee-Shen Chen, Communications in Statistics - Theory and Methods, 2013, 42(10): 1649-1663
This article considers the problem of testing marginal homogeneity in a 2 × 2 contingency table. We first review some well-known conditional and unconditional p-values that have appeared in the statistical literature. We then treat the p-value as the test statistic and use the unconditional approach to obtain a modified p-value, which is shown to be valid. For a given nominal level, the rejection region of the modified p-value test contains that of the original p-value test. Some attractive properties of the modified p-value are given; in particular, under mild conditions the rejection region of the modified p-value test is shown to be the Barnard convex set described by Barnard (1947). If the one-sided null hypothesis involves two nuisance parameters, we show that this result can reduce the dimension of the nuisance parameter space from two to one when computing modified p-values and sizes of tests. Numerical studies, including an illustrative example, are given. Numerical comparisons show that the sizes of the modified p-value tests are closer to the nominal level than those of the original p-value tests in many cases, especially for small to moderate sample sizes.
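The construction can be sketched for the matched-pairs (McNemar-type) formulation of marginal homogeneity, where the nuisance parameter space reduces to the single probability of a discordant pair; an illustrative sketch using the exact conditional p-value as the test statistic (a generic formulation of the idea, not the article's exact algorithm, and intended for small samples):

```python
import numpy as np
from scipy import stats

def mcnemar_exact_p(b, c):
    """One-sided exact conditional p-value: P(B >= b), B ~ Bin(b + c, 1/2)."""
    return stats.binom.sf(b - 1, b + c, 0.5)

def modified_p(b_obs, c_obs, n, grid=200):
    """Treat the p-value itself as the test statistic and maximise
    P(p(B, C) <= p_obs) over the single remaining nuisance parameter,
    the probability of a discordant pair."""
    p_obs = mcnemar_exact_p(b_obs, c_obs)
    worst = 0.0
    for pd in np.linspace(1e-4, 1 - 1e-4, grid):     # P(discordant pair)
        total = 0.0
        for nd in range(n + 1):                      # discordant-pair count
            w = stats.binom.pmf(nd, n, pd)
            bs = np.arange(nd + 1)
            pv = stats.binom.sf(bs - 1, nd, 0.5)
            total += w * stats.binom.pmf(bs, nd, 0.5)[pv <= p_obs].sum()
        worst = max(worst, total)
    return worst
```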
9.
Wilson Confidence Intervals for the Two-Sample Log-Odds-Ratio in Stratified 2 × 2 Contingency Tables
Large-sample Wilson-type confidence intervals (CIs) are derived for a parameter of interest in many clinical trial settings: the log-odds-ratio in a two-sample experiment comparing binomial success proportions, say between cases and controls. The methods cover several scenarios: (i) results embedded in a single 2 × 2 contingency table; (ii) a series of K 2 × 2 tables with a common parameter; or (iii) K tables where the parameter may change across tables under the influence of a covariate. The Wilson CI requires only simple numerical computation and, for example, is easily carried out in Excel. The main competitor, the exact CI, has two disadvantages: it requires burdensome search algorithms in the multi-table case, and it exhibits strong over-coverage associated with long confidence intervals. All the application cases are illustrated through a well-known example. A simulation study then investigates how the Wilson CI performs among several competing methods. The Wilson interval is shortest, except for very large odds ratios, while maintaining coverage similar to Wald-type intervals. An alternative to the Wald CI is the Agresti–Coull CI, calculated from the Wilson and Wald CIs, which has the same length as the Wald CI but improved coverage.
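The single-proportion Wilson score interval that these intervals generalise is standard; a minimal sketch (the two-sample log-odds-ratio construction derived in the paper is not reproduced here):

```python
import numpy as np
from scipy import stats

def wilson_ci(x, n, level=0.95):
    """Wilson score interval for a binomial proportion x/n."""
    z = stats.norm.ppf(0.5 + level / 2)
    phat = x / n
    centre = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * np.sqrt(phat * (1 - phat) / n
                                          + z**2 / (4 * n**2))
    return centre - half, centre + half
```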
10.
Uwe Hassler, Matei Demetrescu, Adina I. Tarcolea, AStA Advances in Statistical Analysis, 2011, 95(2): 187-204
The asymptotically normal, regression-based LM integration test is adapted for panels with correlated units. The N different units may be integrated of different (fractional) orders under the null hypothesis. The paper first reviews conditions under which the test statistic is asymptotically (as T→∞) normal in a single unit. Then we adopt the framework of seemingly unrelated regression (SUR) for cross-correlated panels and discuss a panel test statistic based on the feasible generalized least squares (GLS) estimator, which follows a χ²(N) distribution. Third, a more powerful statistic is obtained by working under the assumption of equal deviations from the respective null in all units. Fourth, feasible GLS requires inversion of sample covariance matrices, typically imposing T > N; in addition, we discuss alternative covariance matrix estimators for T < N. The usefulness of our results is assessed in Monte Carlo experimentation.
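The single-unit building block can be sketched as a simple time-domain regression; the following implements a Breitung–Hassler-type LM regression test of H0: y ~ I(d0) (a sketch under simplifying assumptions such as no short-run dynamics; the paper's SUR/GLS panel statistics build on statistics of this kind):

```python
import numpy as np

def lm_frac_test(y, d0=1.0):
    """Regression-based LM test of H0: y ~ I(d0), single unit.
    Filter e_t = (1 - L)^{d0} y_t, then regress e_t on the harmonically
    weighted sum of its own lags; the t-statistic on that regressor is
    asymptotically standard normal under H0."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    w = np.ones(T)                                   # (1 - L)^{d0} weights
    for j in range(1, T):
        w[j] = w[j - 1] * (j - 1 - d0) / j
    e = np.array([w[:t + 1][::-1] @ y[:t + 1] for t in range(T)])
    estar = np.array([sum(e[t - j] / j for j in range(1, t + 1))
                      for t in range(T)])
    u, v = e[1:], estar[1:]                          # response and regressor
    phi = (u @ v) / (v @ v)
    resid = u - phi * v
    se = np.sqrt((resid @ resid) / (len(u) - 1) / (v @ v))
    return phi / se
```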
11.
In this paper we consider the conditional Koziol–Green model of Braekers and Veraverbeke (2008, A conditional Koziol–Green model under dependent censoring, Statist. Probab. Lett., accepted for publication), which generalizes the Koziol–Green model of Veraverbeke and Cadarso Suárez (2000, Estimation of the conditional distribution in a conditional Koziol–Green model, Test 9, 97–122) by assuming that the association between a censoring time and a time until an event is described by a known Archimedean copula function. In this way they obtained an informative censoring model with two different types of informative censoring. For this model, Braekers and Veraverbeke (2008) derived a non-parametric Koziol–Green estimator of the conditional distribution function of the time until an event, for which they showed uniform consistency and asymptotic normality. In this paper we extend their results and prove the weak convergence of the process associated with this estimator. Furthermore, we show that the conditional Koziol–Green estimator is asymptotically more efficient in this model than the general copula-graphic estimator of Braekers and Veraverbeke (2005, A copula-graphic estimator for the conditional survival function under dependent censoring, Canad. J. Statist. 33, 429–447). As a final result, we construct an asymptotic confidence band for the conditional Koziol–Green estimator. Through a simulation study, we investigate the small-sample properties of this asymptotic confidence band. Afterwards, we apply the estimator and its confidence band to a practical data set.
12.
A non-parametric transformation function is introduced to transform data to any continuous distribution. When transformation of data to normality is desired, the use of a suitable parametric pre-transformation function improves the performance of the proposed non-parametric transformation function. The resulting semi-parametric transformation function is shown empirically, via a Monte Carlo study, to perform at least as well as any parametric transformation currently available in the literature.
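A generic non-parametric normalising transformation of this kind can be sketched via ranks and the normal quantile function (an illustrative construction, not the paper's proposed function):

```python
import numpy as np
from scipy import stats

def rank_normalise(x):
    """Non-parametric normalising transformation: empirical CDF
    (via average ranks, shifted off 0 and 1) followed by the standard
    normal quantile function."""
    ranks = stats.rankdata(x)
    u = (ranks - 0.5) / len(x)
    return stats.norm.ppf(u)
```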
13.
C. A. Glasbey, Statistics and Computing, 2009, 19(1): 49-56
Dynamic programming (DP) is a fast, elegant method for solving many one-dimensional optimisation problems but, unfortunately, most problems in image analysis, such as restoration and warping, are two-dimensional. We consider three generalisations of DP. The first is iterated dynamic programming (IDP), where DP is used to recursively solve each of a sequence of one-dimensional problems in turn, to find a local optimum. The second is an empirical, stochastic optimiser, implemented by adding progressively less noise to IDP. The final approach replaces DP by a more computationally intensive forward-backward Gibbs sampler and uses a simulated-annealing cooling schedule. Results are compared with the existing pixel-by-pixel methods of iterated conditional modes (ICM) and simulated annealing in two applications: restoring a synthetic aperture radar (SAR) image, and warping a pulsed-field electrophoresis gel into alignment with a reference image. We find that IDP and its stochastic variant outperform the remaining algorithms.
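The one-dimensional DP pass at the heart of IDP can be sketched as a minimum-cost path problem (a generic seam-style formulation with unit row moves; Glasbey's objectives for restoration and warping differ):

```python
import numpy as np

def dp_min_path(cost):
    """One-dimensional DP pass: minimum-cost left-to-right path through a
    cost matrix, with row moves of at most one pixel per column. IDP
    applies passes like this alternately along the rows and columns of a
    two-dimensional problem until a local optimum is reached."""
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(i - 1, 0), min(i + 2, n_rows)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    i = int(np.argmin(acc[:, -1]))
    total = acc[i, -1]
    path = [i]
    for j in range(n_cols - 1, 0, -1):
        i = back[i, j]
        path.append(i)
    return path[::-1], total
```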
14.
Tamae Kawasaki, Communications in Statistics - Simulation and Computation, 2015, 44(7): 1850-1866
In this paper, we consider testing the equality of two mean vectors with unequal covariance matrices. In the case of equal covariance matrices, we can use Hotelling's T² statistic, which follows the F distribution under the null hypothesis. With unequal covariance matrices, however, the T²-type test statistic does not follow the F distribution, and it is difficult to derive its exact distribution. In this paper, we propose an approximate solution to the problem by adjusting the degrees of freedom of the F distribution. Asymptotic expansions up to terms of order N⁻² for the first and second moments of the U statistic are given, where N is the total sample size minus two. A new approximate degrees of freedom and its bias correction are obtained. Finally, a numerical comparison is presented via Monte Carlo simulation.
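A standard member of this family of approximate-degrees-of-freedom solutions is easy to sketch; the following uses the classical Nel–van der Merwe approximation for orientation (the paper's higher-order approximation and its bias correction are not reproduced here):

```python
import numpy as np
from scipy import stats

def t2_unequal_cov(x1, x2):
    """T2-type test of equal mean vectors with unequal covariance matrices,
    using the Nel-van der Merwe approximate degrees of freedom nu; the
    scaled statistic is referred to F(p, nu - p + 1)."""
    n1, p = x1.shape
    n2 = x2.shape[0]
    d = x1.mean(axis=0) - x2.mean(axis=0)
    s1 = np.cov(x1, rowvar=False) / n1
    s2 = np.cov(x2, rowvar=False) / n2
    s = s1 + s2
    t2 = float(d @ np.linalg.solve(s, d))
    def g(a):                                    # tr(A^2) + (tr A)^2
        return np.trace(a @ a) + np.trace(a) ** 2
    nu = g(s) / (g(s1) / (n1 - 1) + g(s2) / (n2 - 1))
    f_stat = (nu - p + 1) / (nu * p) * t2
    return f_stat, stats.f.sf(f_stat, p, nu - p + 1)
```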
15.
After initiation of treatment, HIV viral load undergoes multiphasic changes, which indicates that the viral decay rate is a time-varying process. Mixed-effects models with different time-varying decay rate functions have been proposed in the literature. However, two critical issues remain unresolved: (i) it is not clear which model is more appropriate for practical use, and (ii) the model random errors are commonly assumed to follow a normal distribution, which may be unrealistic and can obscure important features of within- and among-subject variation. Because the asymmetry of HIV viral load data is still noticeable even after transformation, it is important to use a more general distribution family that allows the unrealistic normality assumption to be relaxed. We developed skew-elliptical (SE) Bayesian mixed-effects models by allowing the model random errors to have an SE distribution. We compared the performance of five SE models with different time-varying decay rate functions. For each model, we also contrasted the performance under different random error assumptions: normal, Student-t, skew-normal, or skew-t. Two AIDS clinical trial datasets were used to illustrate the proposed models and methods. The results indicate that the model with a time-varying viral decay rate comprising two exponential components is preferred. Among the four distributional assumptions, the skew-t and skew-normal models fit the data better than the normal or Student-t models, suggesting that a skewed distribution should be assumed in order to achieve reasonable results when the data exhibit skewness.
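For orientation, a widely used member of this model class expresses the log-transformed viral load of subject i at time t_ij through two exponential decay phases (a standard form with assumed notation; the article's five candidate decay-rate functions are variations on this theme):

```latex
\log_{10} V(t_{ij}) \;=\;
\log_{10}\!\left( e^{\,p_{1i} - \lambda_{1i} t_{ij}} + e^{\,p_{2i} - \lambda_{2i} t_{ij}} \right) + e_{ij},
```

where (p_{1i}, λ_{1i}, p_{2i}, λ_{2i}) are subject-specific random parameters, the λ's govern the two decay phases, and the within-subject errors e_ij carry the normal, Student-t, skew-normal, or skew-t assumption being compared.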
16.
Christian H. Weiß, Statistics and Computing, 2011, 21(1): 1-16
Several procedures of sequential pattern analysis are designed to detect frequently occurring patterns in a single categorical time series (episode mining). Based on these frequent patterns, rules are generated and evaluated, for example, in terms of their confidence. The confidence value is commonly interpreted as an estimate of a conditional probability, so some kind of stochastic model has to be assumed; here, the model is identified as a variable-length Markov model. Under this assumption, the usual confidences are maximum likelihood estimates of the transition probabilities of the Markov model. We discuss how to efficiently fit an appropriate model to the data, and rules are then formulated based on this model. It is demonstrated that this new approach generates noticeably fewer and more reliable rules.
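The confidence of a rule as a conditional relative frequency is straightforward to compute; a minimal sketch for serial episodes with immediate succession (an illustrative simplification; the paper's episode windows and model fitting are more general):

```python
def rule_confidence(series, lhs, rhs):
    """Confidence of the rule lhs -> rhs in a categorical time series:
    the relative frequency with which an occurrence of lhs is immediately
    followed by rhs; under the Markov-model view this is the ML estimate
    of the transition probability P(rhs | lhs)."""
    lhs, rhs = tuple(lhs), tuple(rhs)
    k, m = len(lhs), len(rhs)
    n_lhs = n_both = 0
    for i in range(len(series) - k + 1):
        if tuple(series[i:i + k]) == lhs:
            n_lhs += 1
            if tuple(series[i + k:i + k + m]) == rhs:
                n_both += 1
    return n_both / n_lhs if n_lhs else float("nan")
```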
17.
Square contingency tables with the same row and column classification occur frequently in a wide range of statistical applications, e.g. whenever the members of a matched pair are classified on the same scale, which is usually ordinal. Such tables are analysed by choosing an appropriate loglinear model. We focus on the models of symmetry, triangular, diagonal and ordinal quasi-symmetry. The fit of a specific model is tested by the chi-squared test or the likelihood-ratio test, where p-values are calculated from the asymptotic chi-square distribution of the test statistic or, if this seems unjustified, from the exact conditional distribution. Since the calculation of exact p-values is often not feasible, we propose alternatives based on algebraic statistics combined with MCMC methods.
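Among the models listed, the symmetry model has a classical asymptotic test that is easy to state; a sketch of Bowker's chi-squared test of symmetry (the algebraic-statistics MCMC alternative proposed in the paper replaces this asymptotic reference distribution when it is unjustified):

```python
import numpy as np
from scipy import stats

def bowker_symmetry_test(table):
    """Bowker's chi-squared test of the symmetry model for a square
    table: X^2 = sum_{i<j} (n_ij - n_ji)^2 / (n_ij + n_ji), with one
    degree of freedom per off-diagonal pair having n_ij + n_ji > 0."""
    t = np.asarray(table, dtype=float)
    iu = np.triu_indices_from(t, k=1)
    num = (t[iu] - t.T[iu]) ** 2
    den = t[iu] + t.T[iu]
    mask = den > 0
    x2 = (num[mask] / den[mask]).sum()
    df = int(mask.sum())
    return x2, stats.chi2.sf(x2, df)
```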
18.
Klaus T. Hess, AStA Advances in Statistical Analysis, 2009, 93(2): 221-233
It is well known that, for a multiplicative tariff with independent Poisson distributed claim numbers in the different tariff cells, the maximum-likelihood estimators of the parameters satisfy the marginal-sum equations. In the present paper we show that this is also true under the more general assumption that the claim numbers of the different cells arise from the decomposition of a collective model for the whole portfolio of risks. In this general setting, the claim numbers of the different cells need not be independent and need not be Poisson distributed.
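For reference, with cell weights w_ij, tariff factors α_i and β_j, and observed claim numbers n_ij, the marginal-sum equations take the following standard form (notation assumed here):

```latex
\sum_{j} w_{ij}\,\hat{\alpha}_i \hat{\beta}_j \;=\; \sum_{j} n_{ij}
\quad\text{for all } i,
\qquad
\sum_{i} w_{ij}\,\hat{\alpha}_i \hat{\beta}_j \;=\; \sum_{i} n_{ij}
\quad\text{for all } j.
```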
19.
Melinda H. Mccann, Communications in Statistics - Simulation and Computation, 2013, 42(5): 961-975
In this article, we consider the problem of constructing simultaneous confidence intervals for odds ratios in 2 × k classification tables with a fixed reference level. We discuss six methods designed to control the familywise error rate and investigate these methods in terms of simultaneous coverage probability and mean interval length. We illustrate the importance and the implementation of these methods using two HIV public health studies.
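A simple baseline in this setting is a Bonferroni-adjusted Wald-type interval for each log-odds-ratio against the reference level; a hedged sketch (an illustrative baseline, not necessarily one of the six methods compared in the article):

```python
import numpy as np
from scipy import stats

def bonferroni_or_cis(counts, level=0.95):
    """Simultaneous Wald-type CIs for the odds ratios of rows 2..k versus
    the fixed reference row 1 of a 2 x k table, Bonferroni-adjusted to
    control the familywise error rate. counts: (k, 2) array of
    (success, failure) pairs; 0.5 is added to each count."""
    counts = np.asarray(counts, dtype=float) + 0.5
    k = counts.shape[0]
    z = stats.norm.ppf(1 - (1 - level) / (2 * (k - 1)))
    a, b = counts[0]
    cis = []
    for c, d in counts[1:]:
        log_or = np.log((c * b) / (d * a))
        se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        cis.append((np.exp(log_or - z * se), np.exp(log_or + z * se)))
    return cis
```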
20.
The profile likelihood of the reliability parameter θ = P(X < Y), or of the ratio of means, when X and Y are independent exponential random variables, has a simple analytical expression and is a powerful tool for making inferences. Inferences about θ can be given in terms of likelihood-confidence intervals with a simple algebraic structure, even for small and unequal samples. The case of right-censored data can also be handled in a simple way. This is in marked contrast with the complicated expressions, dependent on cumbersome numerical calculations of multidimensional integrals, that are required to obtain the asymptotic confidence intervals traditionally presented in the scientific literature.
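The construction is simple to sketch: in the rate parametrisation θ = λ_X/(λ_X + λ_Y), and the nuisance parameter κ = λ_X + λ_Y can be profiled out in closed form (a minimal sketch for uncensored samples; cutoffs such as R(θ) ≥ 0.15 give approximate 95% likelihood-confidence intervals):

```python
import numpy as np

def profile_likelihood_theta(x, y, grid=None):
    """Relative profile likelihood R(theta) of theta = P(X < Y) for
    independent exponential samples x, y (rate parametrisation), with
    kappa = lambda_x + lambda_y profiled out analytically."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n, m, sx, sy = len(x), len(y), x.sum(), y.sum()
    if grid is None:
        grid = np.linspace(1e-4, 1 - 1e-4, 2001)

    def lp(th):                       # profile log-likelihood + constant
        return (n * np.log(th) + m * np.log(1 - th)
                - (n + m) * np.log(th * sx + (1 - th) * sy))

    theta_hat = (n / sx) / (n / sx + m / sy)    # MLE of P(X < Y)
    r = np.exp(lp(grid) - lp(theta_hat))        # relative profile likelihood
    return grid, r, theta_hat

# e.g. an approximate 95% likelihood-confidence interval:
# grid, r, th = profile_likelihood_theta(x, y)
# lo, hi = grid[r >= 0.15][[0, -1]]
```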