Similar Articles
1.
For estimating an unknown parameter θ, we introduce and motivate the use of balanced loss functions of the form L_{ρ,ω,δ₀}(θ, δ) = ω ρ(δ₀, δ) + (1 − ω) ρ(θ, δ), as well as the weighted version q(θ) L_{ρ,ω,δ₀}(θ, δ), where ρ(θ, δ) is an arbitrary loss function, δ₀ is a chosen a priori "target" estimator of θ, ω ∈ [0, 1), and q(·) is a positive weight function. We develop Bayesian estimators under L_{ρ,ω,δ₀} with ω > 0 by relating them to Bayesian solutions under L_{ρ,ω,δ₀} with ω = 0. Illustrations are given for various choices of ρ, such as absolute value, entropy, linex, and squared error type losses. Finally, under various robust Bayesian analysis criteria, including posterior regret gamma-minimaxity, conditional gamma-minimaxity, and most stable, we establish explicit connections between optimal actions derived under balanced and unbalanced losses.
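
For squared-error ρ, the Bayes rule under this balanced loss has a simple closed form: a convex combination of the target estimator δ₀ and the posterior mean. A minimal sketch for a normal likelihood with known variance and a conjugate normal prior (all numbers illustrative):

```python
import numpy as np

def bayes_balanced_squared_error(x, delta0, omega, prior_mean, prior_var, noise_var):
    """Bayes estimate under balanced squared-error loss
    L(theta, d) = omega*(d - delta0)^2 + (1 - omega)*(d - theta)^2
    for a normal likelihood with known variance and a normal prior."""
    n = len(x)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)  # conjugate update
    post_mean = post_var * (prior_mean / prior_var + np.sum(x) / noise_var)
    # Balanced-loss Bayes rule: convex combination of target and posterior mean.
    return omega * delta0 + (1.0 - omega) * post_mean

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=25)
delta0 = x.mean()  # a priori "target" estimator: here, the MLE
print(bayes_balanced_squared_error(x, delta0, omega=0.3,
                                   prior_mean=0.0, prior_var=10.0, noise_var=1.0))
```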

2.
There are many situations where the usual random sample from a population of interest is not available, due to the data having unequal probabilities of entering the sample. The method of weighted distributions models this ascertainment bias by adjusting the probabilities of actual occurrence of events to arrive at a specification of the probabilities of the events as observed and recorded. We consider two different classes of contaminated or mixed weight functions, Γ_a = {w(x) : w(x) = (1 − ε)w₀(x) + ε q(x), q ∈ Q} and Γ_g = {w(x) : w(x) = w₀^{1−ε}(x) q^ε(x), q ∈ Q}, where w₀(x) is the elicited weight function, Q is a class of positive functions, and 0 ≤ ε ≤ 1 is a small number. We also study the local variation of φ-divergence over the classes Γ_a and Γ_g, measuring robustness through divergence measures within a Bayesian approach. Two examples are studied.
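
The two contamination classes are easy to compute pointwise. A small sketch, with a length-biased elicited weight w₀(x) = x and a hypothetical alternative q(x) chosen only for illustration:

```python
import numpy as np

def arithmetic_contamination(w0, q, eps):
    """Gamma_a class: pointwise mixture (1 - eps)*w0(x) + eps*q(x)."""
    return lambda x: (1.0 - eps) * w0(x) + eps * q(x)

def geometric_contamination(w0, q, eps):
    """Gamma_g class: geometric mixture w0(x)^(1 - eps) * q(x)^eps."""
    return lambda x: w0(x) ** (1.0 - eps) * q(x) ** eps

w0 = lambda x: x            # elicited weight: length bias
q = lambda x: np.sqrt(x)    # an alternative positive weight from Q (illustrative)

x = np.linspace(0.1, 5.0, 5)
print(arithmetic_contamination(w0, q, eps=0.1)(x))
print(geometric_contamination(w0, q, eps=0.1)(x))
```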

3.
Methods: Based on the index S (S = sensitivity (SEN) × specificity (SPE)), the new weighted product index Sw is defined as Sw = (SEN)^(2w) × (SPE)^(2(1−w)), where 0 ≤ w ≤ 1. Sw is developed as a new tool to select the optimal cut point in ROC analysis and is compared with two other commonly used criteria.

Results: Comparing the optimal cut points under the three criteria, the range of the optimal cut point is widest for the maximized weighted Youden index criterion, narrowest for the weighted closest-to-(0,1) criterion, and in between for the weighted product index Sw criterion.
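
A minimal sketch of the Sw criterion as a cut-point selector, scanning all observed biomarker values (the decision rule "positive if score ≥ c" is an assumption of this sketch):

```python
import numpy as np

def optimal_cutpoint_sw(scores, labels, w=0.5):
    """Pick the cut point maximizing Sw = SEN^(2w) * SPE^(2(1-w)).
    scores: biomarker values; labels: 1 = diseased, 0 = healthy."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_c, best_sw = None, -np.inf
    for c in np.unique(scores):
        sen = np.mean(scores[labels == 1] >= c)  # sensitivity at cut point c
        spe = np.mean(scores[labels == 0] < c)   # specificity at cut point c
        sw = sen ** (2 * w) * spe ** (2 * (1 - w))
        if sw > best_sw:
            best_c, best_sw = c, sw
    return best_c, best_sw

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.2, 1, 200)])
labels = np.concatenate([np.zeros(200, int), np.ones(200, int)])
print(optimal_cutpoint_sw(scores, labels, w=0.6))
```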


4.
The process capability index C_pk is widely used when measuring the capability of a manufacturing process. A process is defined to be capable if the capability index exceeds a stated threshold value, e.g. C_pk > 4/3. This inequality can be expressed graphically using a process capability plot, which is a plot in the plane defined by the process mean and the process standard deviation, showing the region for a capable process. In the process capability plot, a safety region can be plotted to obtain a simple graphical decision rule to assess process capability at a given significance level. We consider safety regions to be used for the index C_pk. Under the assumption of normality, we derive elliptical safety regions so that, using a random sample, conclusions about the process capability can be drawn at a given significance level. This simple graphical tool is helpful when trying to understand whether it is the variability, the deviation from target, or both that need to be reduced to improve the capability. Furthermore, using safety regions, several characteristics with different specification limits and different sample sizes can be monitored in the same plot. The proposed graphical decision rule is also investigated with respect to power.
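
A point estimate of C_pk is straightforward to compute; note that comparing it directly with 4/3, as below, ignores sampling variability, which is exactly what the paper's elliptical safety regions are designed to account for. A minimal sketch:

```python
import numpy as np

def cpk(x, lsl, usl):
    """Estimated process capability index
    C_pk = min(USL - mean, mean - LSL) / (3 * sd)."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

rng = np.random.default_rng(2)
x = rng.normal(loc=10.0, scale=0.5, size=80)  # measurements of one characteristic
index = cpk(x, lsl=8.0, usl=12.0)
# A naive point-estimate decision, with no significance-level guarantee:
print(index, "capable" if index > 4.0 / 3.0 else "not shown capable")
```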

5.
The aim of this study is to assign weights w₁, …, w_m to m clustering variables Z₁, …, Z_m so that the k uncovered groups reveal more meaningful within-group coherence. We propose a new criterion to be minimized: the sum of the weighted within-cluster sums of squares plus a penalty for heterogeneity in the variable weights w₁, …, w_m. We present the computing algorithm for this k-means clustering, a working procedure to determine a suitable value of the penalty constant, and numerical examples, one simulated and two real.
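
A rough sketch of one way such a criterion can be minimized, alternating k-means steps with a weight update; the quadratic penalty form and the closed-form update below are assumptions of this sketch, not necessarily the paper's exact algorithm:

```python
import numpy as np

def penalized_weighted_kmeans(X, k, lam=1.0, n_iter=20, seed=0):
    """Sketch of k-means with per-variable weights w_1..w_m chosen to
    minimize sum_j w_j * WSS_j + lam * sum_j (w_j - 1/m)^2 with sum w_j = 1.
    The closed-form weight update follows from a Lagrange argument; the
    penalty keeps the weights from collapsing onto a single variable."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    w = np.full(m, 1.0 / m)
    centers = X[rng.choice(n, k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to the closest center in the weighted metric.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update centers.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
        # Per-variable within-cluster sums of squares.
        S = np.array([((X[:, j] - centers[labels, j]) ** 2).sum() for j in range(m)])
        # Closed-form penalized weight update, clipped and renormalized.
        w = np.clip(1.0 / m + (S.mean() - S) / (2.0 * lam), 0.0, None)
        w /= w.sum()
    return labels, centers, w

X = np.random.default_rng(3).normal(size=(150, 4))
X[:75, 0] += 3.0  # only variable 0 separates the two groups
print(penalized_weighted_kmeans(X, k=2, lam=50.0)[2])  # weights favor variable 0
```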

6.
Background: On the basis of statistical methods for the index S (S = SEN × SPE), we develop a new weighted way (the weighted product index Sw) of combining sensitivity and specificity with user-defined weights. Methods: The new weighted product index is defined as Sw = (SEN)^(2w) × (SPE)^(2(1−w)). Results: For large samples, the test statistic Z for two independent-sample weighted product indices can be either a monotonically increasing/decreasing function or a non-monotonic function of the weight w. Simulations show that the type I error of this statistic stays close to the nominal 5% level, making it more conservative than the weighted Youden index.
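
A hedged sketch of a two-sample comparison of Sw values; the paper derives a large-sample Z with an analytic variance, whereas this sketch substitutes a parametric bootstrap variance and assumes n cases and n controls per study:

```python
import numpy as np

def sw_index(sen, spe, w):
    return sen ** (2 * w) * spe ** (2 * (1 - w))

def two_sample_sw_z(sen1, spe1, n1, sen2, spe2, n2, w, n_boot=2000, seed=0):
    """Z statistic for H0: Sw1 = Sw2 between two independent diagnostic
    studies. Variances come from a parametric bootstrap of the binomial
    counts (assuming n cases and n controls per study); the paper's
    large-sample variance would replace them in a faithful implementation."""
    rng = np.random.default_rng(seed)
    def boot(sen, spe, n):
        s = rng.binomial(n, sen, n_boot) / n  # resampled sensitivities
        p = rng.binomial(n, spe, n_boot) / n  # resampled specificities
        return sw_index(s, p, w)
    b1, b2 = boot(sen1, spe1, n1), boot(sen2, spe2, n2)
    return (sw_index(sen1, spe1, w) - sw_index(sen2, spe2, w)) / \
           np.sqrt(b1.var() + b2.var())

print(two_sample_sw_z(0.85, 0.80, 120, 0.75, 0.78, 150, w=0.5))
```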

7.
8.
Consider a linear regression model with unknown regression parameters β₀ and independent errors of unknown distribution. Block the observations into q groups whose independent variables have a common value and measure the homogeneity of the blocks of residuals by a Cramér-von Mises q-sample statistic T_q(β). This statistic is designed so that its expected value, as a function of the chosen regression parameter β, has a minimum value of zero precisely at the true value β₀. The minimizer β̂ of T_q(β) over all β is shown to be a consistent estimate of β₀. It is also shown that the bootstrap distribution of T_q(β̂) can be used to perform a lack-of-fit test of the regression model and to construct a confidence region for β₀.
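
A sketch of the idea with an assumed CvM-type q-sample statistic (the paper's exact statistic may be normalized differently); the minimizer over β is searched numerically starting from the OLS fit:

```python
import numpy as np
from scipy.optimize import minimize

def cvm_q_sample(blocks):
    """CvM-type q-sample statistic: each block's empirical CDF is compared
    with the pooled empirical CDF at the pooled sample points."""
    pooled = np.sort(np.concatenate(blocks))
    N = len(pooled)
    F_pool = np.arange(1, N + 1) / N
    t = 0.0
    for b in blocks:
        F_b = np.searchsorted(np.sort(b), pooled, side="right") / len(b)
        t += len(b) * np.mean((F_b - F_pool) ** 2)
    return t

def t_q(beta, X, y, groups):
    """T_q(beta): homogeneity of the residual blocks for a candidate beta."""
    resid = y - X @ beta
    return cvm_q_sample([resid[groups == g] for g in np.unique(groups)])

rng = np.random.default_rng(4)
groups = np.repeat(np.arange(5), 20)             # q = 5 blocks of replicated x-values
X = np.column_stack([np.ones(100), groups.astype(float)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # starting point for the search
fit = minimize(t_q, x0=beta_ols, args=(X, y, groups), method="Nelder-Mead")
print(fit.x)  # minimizer of T_q, a consistent estimate of beta_0
```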

9.
Let X be a random variable and X^(w) be a weighted random variable corresponding to X. In this paper, we intend to characterize the Pearson system of distributions by a relationship between reliability measures of X and X^(w), for some weight function w > 0.

10.
11.
In receiver operating characteristic (ROC) analysis, the area under the ROC curve (AUC) serves as an overall measure of diagnostic accuracy. Another popular ROC index is the Youden index (J), which corresponds to the maximum of sensitivity plus specificity minus one. Since the AUC and J describe different aspects of diagnostic performance, we propose to test whether a biomarker beats the pre-specified target values AUC₀ and J₀ simultaneously, with H₀: AUC ≤ AUC₀ or J ≤ J₀ against H_a: AUC > AUC₀ and J > J₀. This is a multivariate order-restricted hypothesis with a non-convex space under H_a, and traditional likelihood-ratio-based tests do not apply. The intersection-union test (IUT) and a joint test are proposed for this problem. While the IUT tests the AUC and the Youden index separately, the joint test is constructed from a joint confidence region. Findings from the simulation suggest both tests yield similar power estimates. We also illustrate the tests with a real data example, where the results of both tests are consistent. In conclusion, testing jointly on AUC and J gives more reliable results than using a single index, and the IUT is easy to apply and has power similar to the joint test.
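
A minimal sketch of the IUT logic: compute the empirical AUC and Youden index, then reject only if both one-sided tests reject. The percentile-bootstrap p-values below are a stand-in for the paper's test construction:

```python
import numpy as np

def auc_and_youden(cases, controls):
    """Empirical AUC (Mann-Whitney) and Youden index J over all cut points."""
    auc = np.mean(cases[:, None] > controls[None, :]) + \
          0.5 * np.mean(cases[:, None] == controls[None, :])
    cuts = np.unique(np.concatenate([cases, controls]))
    sens = np.array([(cases >= c).mean() for c in cuts])
    spec = np.array([(controls < c).mean() for c in cuts])
    return auc, (sens + spec - 1).max()

def iut_test(cases, controls, auc0, j0, n_boot=2000, alpha=0.05, seed=0):
    """Intersection-union test: reject H0 (AUC <= auc0 or J <= j0) only if
    both one-sided bootstrap tests reject at level alpha."""
    rng = np.random.default_rng(seed)
    boots = np.array([auc_and_youden(rng.choice(cases, cases.size),
                                     rng.choice(controls, controls.size))
                      for _ in range(n_boot)])
    p_auc = np.mean(boots[:, 0] <= auc0)  # percentile-bootstrap p-value for AUC
    p_j = np.mean(boots[:, 1] <= j0)      # percentile-bootstrap p-value for J
    return (p_auc < alpha) and (p_j < alpha), (p_auc, p_j)

rng = np.random.default_rng(5)
print(iut_test(rng.normal(1.5, 1, 120), rng.normal(0, 1, 120), auc0=0.7, j0=0.3))
```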

12.
We consider the situation where one wants to maximise a function f(θ, x) with respect to x, with θ unknown and estimated from observations y_k. This may correspond to the case of a regression model, where one observes y_k = f(θ, x_k) + ε_k, with ε_k some random error, or to the Bernoulli case where y_k ∈ {0, 1}, with Pr[y_k = 1 | θ, x_k] = f(θ, x_k). Special attention is given to sequences in which the next point x_{k+1} is chosen using an estimated value of θ obtained from (x₁, y₁), …, (x_k, y_k) and a penalty d_k(x) for poor estimation. Approximately optimal rules are suggested in the linear regression case with a finite horizon, where one wants to maximize ∑_{i=1}^{N} w_i f(θ, x_i) with {w_i} a weighting sequence. Various examples are presented, with a comparison with a Pólya urn design and an up-and-down method for a binary response problem.

13.
This article compares two recently proposed test statistics for unobserved cluster effects (C, SSR_w) with three statistics frequently mentioned in panel econometrics (BP, SLM, F). Simulations include data generating processes with a cluster-level explanatory variable, scenarios with unequally sized clusters, processes that have an incorrectly specified cluster structure, and processes that have no cluster structure but rather spatial correlation. All but the F test exhibit small-sample deviation from the asymptotic distribution. The SLM, F, and SSR_w tests show equivalent power when cluster sizes are balanced; SLM has the greatest power when cluster sizes are unbalanced.

14.
Suppose it is desired to obtain a large number N_s of items for which individual counting is impractical, but one can demand a batch to weigh at least w units so that the number of items N in the batch is close to the desired number N_s. If the items have mean weight ω, it is reasonable to set w equal to ω N_s when ω is known. When ω is unknown, one can take a sample of size n, not bigger than N_s, estimate ω by a good estimator ω̂_n, and set w equal to ω̂_n N_s. Let R_n = K_e p² N_s² / n + K_s n be a measure of loss, where K_e and K_s are coefficients representing the cost of the estimation error and the cost of sampling, respectively, and p is the coefficient of variation of the item weights. If p is known, choosing the sample size as the integer closest to p C N_s, where C = (K_e/K_s)^(1/2), minimizes R_n. If p is unknown, a simple sequential procedure is proposed whose average sample number is shown to be asymptotically equal to the optimal fixed sample size. When the weights are assumed to have a gamma distribution given ω, and ω has a prior inverted gamma distribution, the optimal sample size is the nonnegative integer closest to p C N_s + p² A (p C − 1), where A is a known constant given by the prior distribution.
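
A minimal sketch of the known-p rule and its risk, using the notation above (the cost coefficients are illustrative):

```python
import numpy as np

def optimal_sample_size(p, Ns, Ke, Ks):
    """Fixed-sample-size rule when the coefficient of variation p is known:
    n* = integer closest to p*C*Ns with C = sqrt(Ke/Ks), which minimizes
    R_n = Ke * p^2 * Ns^2 / n + Ks * n."""
    C = np.sqrt(Ke / Ks)
    return int(round(p * C * Ns))

def risk(n, p, Ns, Ke, Ks):
    """Loss measure R_n: estimation-error cost plus sampling cost."""
    return Ke * p ** 2 * Ns ** 2 / n + Ks * n

Ns, p, Ke, Ks = 10_000, 0.15, 4.0, 1.0
n_star = optimal_sample_size(p, Ns, Ke, Ks)
print(n_star, risk(n_star, p, Ns, Ke, Ks))
```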

15.
The generalized AR(1) process y_t = a_t y_{t−1} + v_t is considered, where the parameter a_t follows the AR(1) process a_t = G a_{t−1} + w_t. Assuming that v_t and w_t are Gaussian and independent, the first six exact predictors for future values of y_t are derived. These exact predictors are compared with Box-Jenkins-type approximations. MACSYMA, a computer algebra program, is utilized in the derivation of the predictors.
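
A quick simulation of this doubly stochastic process, a minimal sketch with illustrative parameter values:

```python
import numpy as np

def simulate_generalized_ar1(T, G, sigma_v, sigma_w, seed=0):
    """Simulate y_t = a_t * y_{t-1} + v_t with a_t = G * a_{t-1} + w_t,
    where v_t and w_t are independent Gaussian noises."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    a = np.zeros(T)
    for t in range(1, T):
        a[t] = G * a[t - 1] + rng.normal(0.0, sigma_w)
        y[t] = a[t] * y[t - 1] + rng.normal(0.0, sigma_v)
    return y, a

y, a = simulate_generalized_ar1(T=200, G=0.8, sigma_v=1.0, sigma_w=0.1)
print(y[:5])
```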

16.
Let X₁, …, X_n be i.i.d. observations from a heavy-tailed distribution F, i.e. the common distribution of the excesses over a high threshold u_n can be approximated by a generalized Pareto distribution G_{γ,σ_n} with γ > 0. This paper deals with the problem of finding confidence regions for the pair (γ, σ_n): combining the empirical likelihood methodology with estimating equations (close but not identical to the likelihood equations) introduced by Zhang (2007), asymptotically valid confidence regions for (γ, σ_n) are obtained and shown to perform better than Wald-type confidence regions (especially those derived from the asymptotic normality of the maximum likelihood estimators). By profiling out the scale parameter, confidence intervals for the tail index are also derived.
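
For context, a minimal sketch of the underlying peaks-over-threshold step: fit a GPD to the excesses by maximum likelihood (the paper then builds empirical-likelihood confidence regions for (γ, σ_n) rather than relying on these point estimates):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
x = rng.pareto(a=2.0, size=5000) + 1.0  # heavy-tailed sample, true gamma = 0.5
u = np.quantile(x, 0.95)                # high threshold u_n
excesses = x[x > u] - u
# Fit GPD(gamma, sigma_n) to the excesses; location fixed at 0.
gamma_hat, _, sigma_hat = genpareto.fit(excesses, floc=0)
print(gamma_hat, sigma_hat)
```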

17.
18.
In experiments, the classical (ANOVA) F-test is often used to test the omnibus null hypothesis μ₁ = μ₂ = … = μ_j = … = μ_n (all n population means are equal) in a one-way ANOVA design, even when one or more basic assumptions are violated. In the first part of this article, we briefly discuss the consequences of the different types of violations of the basic assumptions (dependent measurements, non-normality, heteroscedasticity) for the validity of the F-test. Secondly, we present a simulation experiment designed to compare the Type I error and power properties of the F-test and some of its parametric adaptations: the Brown & Forsythe F*-test and Welch's V_w-test. It is concluded that the Welch V_w-test offers acceptable control of the Type I error rate combined with (very) high power in most of the experimental conditions. Its use is therefore highly recommended when one or more basic assumptions are violated. In general, the Brown & Forsythe F*-test cannot be recommended on power considerations unless the design is balanced and the homoscedasticity assumption holds.
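
A sketch of Welch's V_w statistic implemented from its standard (Welch 1951) formula, not taken from the paper, with an F reference distribution:

```python
import numpy as np
from scipy.stats import f as f_dist

def welch_anova(*samples):
    """Welch's V_w test for equality of k means under heteroscedasticity.
    Returns the test statistic and its p-value from an F reference."""
    k = len(samples)
    n = np.array([len(s) for s in samples], dtype=float)
    m = np.array([np.mean(s) for s in samples])
    v = np.array([np.var(s, ddof=1) for s in samples])
    w = n / v                   # precision weights
    W = w.sum()
    mw = (w * m).sum() / W      # weighted grand mean
    A = (w * (m - mw) ** 2).sum() / (k - 1)
    h = ((1 - w / W) ** 2 / (n - 1)).sum()
    B = 1 + 2 * (k - 2) / (k ** 2 - 1) * h
    stat = A / B
    df2 = (k ** 2 - 1) / (3 * h)
    return stat, f_dist.sf(stat, k - 1, df2)

rng = np.random.default_rng(7)
groups = [rng.normal(0, 1, 20), rng.normal(0, 3, 15), rng.normal(1, 2, 30)]
print(welch_anova(*groups))
```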

19.
Consider k (> 2) independent populations π₁, …, π_k such that observations obtained from π_i are independent and normally distributed with unknown mean μ_i and unknown variance θ_i, i = 1, …, k. In this paper, we provide lower percentage points of Hartley's extremal quotient statistic for testing the interval hypothesis H₀: θ[k]/θ[1] > δ vs. H_a: θ[k]/θ[1] ≤ δ, where δ ≥ 1 is a predetermined constant and θ[k] (θ[1]) is the maximum (minimum) of θ₁, …, θ_k. The least favorable configuration (LFC) for the test under H₀ is determined in order to obtain the lower percentage points. These percentage points can also be used to construct an upper confidence bound for θ[k]/θ[1].
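
Lower percentage points of the extremal quotient max(s_i²)/min(s_i²) can be approximated by simulation. The configuration below, k − 1 population variances equal to 1 and one equal to δ, is an assumed stand-in for the paper's least favorable configuration:

```python
import numpy as np

def hartley_lower_point(k, n, delta, alpha=0.05, n_sim=10000, seed=0):
    """Monte Carlo lower percentage point of the extremal quotient
    max(s_i^2)/min(s_i^2) under an assumed configuration with k-1
    population variances equal to 1 and one equal to delta."""
    rng = np.random.default_rng(seed)
    sigmas = np.array([1.0] * (k - 1) + [delta])
    stats = np.empty(n_sim)
    for s in range(n_sim):
        var = np.array([np.var(rng.normal(0, np.sqrt(sg), n), ddof=1)
                        for sg in sigmas])
        stats[s] = var.max() / var.min()
    # Reject H0 if the observed quotient falls below this critical value.
    return np.quantile(stats, alpha)

print(hartley_lower_point(k=4, n=15, delta=2.0))
```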

20.
Satten et al. [Satten, G. A., Datta, S., Robins, J. M. (2001). Estimating the marginal survival function in the presence of time dependent covariates. Statist. Probab. Lett. 54: 397–403] proposed an estimator [denoted by Ŝ(t)] of the survival function of failure times that is in the class of survival function estimators proposed by Robins [Robins, J. M. (1993). Information recovery and bias adjustment in proportional hazards regression analysis of randomized trials using surrogate markers. In: Proceedings of the American Statistical Association, Biopharmaceutical Section. Alexandria, VA: ASA, pp. 24–33]. The estimator is appropriate when data are subject to dependent censoring. In this article, it is demonstrated that the estimator Ŝ(t) can be extended to estimate the survival function when data are subject to dependent censoring and left truncation. In addition, we propose an alternative estimator of the survival function [denoted by Ŝ_w(t)] that is represented as an inverse-probability-of-censoring weighted average [Satten, G. A., Datta, S. (2001). The Kaplan–Meier estimator as an inverse-probability-of-censoring weighted average. Amer. Statist. 55: 207–210]. Simulation results show that when truncation is not severe, the mean squared error of Ŝ(t) is smaller than that of Ŝ_w(t), except when censoring is light. However, when truncation is severe, Ŝ_w(t) has the advantage of less bias, and the situation can be reversed.
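
A minimal sketch of the inverse-probability-of-censoring weighted form of Ŝ_w(t), here under independent censoring for simplicity (the paper's setting handles dependent censoring and left truncation):

```python
import numpy as np

def km_curve(times, events):
    """Kaplan-Meier: unique event times and survival values just after them."""
    uniq = np.unique(times[events == 1])
    s, vals = 1.0, []
    for u in uniq:
        s *= 1.0 - ((times == u) & (events == 1)).sum() / (times >= u).sum()
        vals.append(s)
    return uniq, np.array(vals)

def km_left(uniq, vals, t):
    """Left-continuous value S(t-) of the KM step function at each t."""
    idx = np.searchsorted(uniq, t, side="left") - 1
    return np.where(idx >= 0, vals[np.clip(idx, 0, None)], 1.0)

def ipcw_survival(times, delta, t):
    """S_w(t) = 1 - (1/n) * sum_i 1{T_i <= t, delta_i = 1} / K(T_i-), with K
    the Kaplan-Meier estimate of the censoring survival function."""
    cu, cv = km_curve(times, 1 - delta)  # KM treating censorings as "events"
    K = km_left(cu, cv, times)           # K(T_i-) for every subject
    return 1.0 - np.mean(((times <= t) & (delta == 1)) / K)

rng = np.random.default_rng(8)
T0, C = rng.exponential(1.0, 400), rng.exponential(1.5, 400)  # failure, censoring
times, delta = np.minimum(T0, C), (T0 <= C).astype(int)
print(ipcw_survival(times, delta, t=1.0), np.exp(-1.0))  # estimate vs true S(1)
```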
