Similar Articles
20 similar articles found (search time: 31 ms)
1.
Let Xi be nonnegative independent random variables with finite expectations, and let Mn = E[max1≤i≤n Xi]. The value Mn is what can be obtained by a "prophet". A "mortal", on the other hand, may use k ≥ 1 stopping rules t1,…,tk, yielding a return E[maxi=1,…,k Xti]. For n ≥ k the optimal return is Vk(n) = sup E[maxi=1,…,k Xti], where the supremum is over all stopping rules which stop by time n. The well-known "prophet inequality" states that for all such Xi's and one choice (k = 1), Mn ≤ 2V1(n), and the constant 2 cannot be improved on for any n ≥ 2. In contrast, we show that for k = 2 the best constant d satisfying Mn ≤ dV2(n) for all such Xi's depends on n. On the way we obtain constants ck such that Mn ≤ ckVk(n).
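A toy Monte Carlo check of the one-choice (k = 1) prophet inequality, as a sketch under assumptions of my own (independent uniform Xi; all function names are illustrative, not from the paper):

```python
import random

def v1_uniform(scales):
    # Optimal one-choice stopping value V1(n) for independent
    # X_i ~ Uniform(0, a_i), by backward induction:
    #   V = E[X_n]; then V <- E[max(X_i, V)] for i = n-1, ..., 1,
    # using E[max(U(0,a), v)] = (a^2 + v^2) / (2a) when v < a.
    v = scales[-1] / 2.0
    for a in reversed(scales[:-1]):
        v = (a * a + v * v) / (2 * a) if v < a else v
    return v

def prophet_value(scales, trials=200_000, seed=0):
    # Monte Carlo estimate of M_n = E[max_i X_i], the prophet's value.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.uniform(0, a) for a in scales)
    return total / trials

scales = [1.0, 1.0, 1.0, 1.0]
M = prophet_value(scales)   # exact value is 4/5 for four U(0,1) variables
V1 = v1_uniform(scales)
print(M, V1, M / V1)        # the ratio stays below the prophet bound 2
```

For four i.i.d. U(0,1) variables the backward induction gives V1 ≈ 0.742 against the prophet's M = 0.8, so the ratio is far below the worst-case constant 2.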

2.
We consider generalizations of projective Klingenberg and projective Hjelmslev planes, mainly (b, c)-K-structures. These are triples (φ, Π, Π′) where Π and Π′ are incidence structures and φ : Π → Π′ is an epimorphism which satisfies certain lifting axioms for double flags. The congruence relations of such triples are investigated, leading to nice factorizations of φ. Two integer invariants are associated with each congruence relation, generalizing a theorem of Kleinfeld on projective Hjelmslev planes. These invariants are completely characterized for a special class of triples: the balanced, minimally uniform neighbor cohesive (1,1)-K-structures. We show that a balanced neighbor cohesive (1,1)-K-structure Π “over” a PBIBD Π′ is again a PBIBD and compute its invariants. Several methods are given for constructing symmetric “divisible” PBIBDs on arbitrarily many classes.

3.
In this paper, we propose the application of group screening methods for analyzing data using E(fNOD)-optimal mixed-level supersaturated designs possessing the equal-occurrence property. Supersaturated designs are a large class of factorial designs which can be used for screening out the important factors from a large set of potentially active variables. The huge advantage of these designs is that they reduce the experimental cost drastically, but their critical disadvantage is the high degree of confounding among factorial effects. Following the idea of group screening, the f factors are subdivided into g “group-factors”. The “group-factors” are then studied using penalized likelihood methods in a factorial design with orthogonal or near-orthogonal columns. All factors in groups found to have a large effect are then studied in a second stage of experiments. A comparison of the Type I and Type II error rates of various estimation methods via simulation experiments is performed; the results are presented in tables, and discussion follows.

4.
Michael Brown, Serials Review 2004, 30(4):374–377
This installment of “They Might Be Giants” highlights a small online/print zine from New York called Pindeldyboz. Described by its editor Whitney Pastorek as a lit mag, Pindeldyboz publishes new fiction every week or so at the Web site but then publishes longer fiction once a year in the annual print zine of the same name. While this may indicate a split personality on the part of the editor/publisher, the goals of both the print and online versions are the same: to publish good fiction.

5.
Let X(1,n,m1,k), X(2,n,m2,k), …, X(n,n,mn,k) be n generalized order statistics from a continuous distribution F which is strictly increasing over (a,b), −∞ ≤ a < b ≤ ∞, the support of F. Let g be an absolutely continuous and monotonically increasing function on (a,b) with finite g(a+), g(b−) and E(g(X)). Then, for some positive integer s, 1 < s ≤ n, we give a characterization of distributions by means of

6.
With this installment of “They Might Be Giants,” Michael Brown talks with Ben Brown (no relation) about publishing books (ten so far!) and magazines (issue 3 of Words! Words! Words! is about to be released), while at the same time working nine-to-five at a really boring tech job. It's a do-it-yourselfer's dream come true. Serials Review 2002; 28:344–348.

7.
Finite mixture models arise in a natural way as models of unobserved population heterogeneity. It is assumed that the population consists of an unknown number k of subpopulations with parameters λ1, …, λk receiving weights p1, …, pk. Because of the irregularity of the parameter space, the log-likelihood-ratio statistic (LRS) does not have a χ² limit distribution, and it is therefore difficult to use the LRS to test for the number of components. These problems are circumvented by using the nonparametric bootstrap: the mixture algorithm is applied B times to bootstrap samples drawn from the original sample with replacement, and the number of components k is obtained as the mode of the bootstrap distribution of the estimated k. This approach is presented using the Times newspaper data and investigated in a simulation study for mixtures of Poisson data.
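The outer bootstrap loop can be sketched as follows. This is my own illustration, not the paper's code: the mixture-fitting step is replaced by a crude histogram-peak counter (`estimate_k`) purely to keep the sketch self-contained, and the data are simulated.

```python
import math
import random
from collections import Counter

def poisson(rng, lam):
    # Knuth's Poisson sampler (stdlib-only).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def estimate_k(sample):
    # Crude stand-in for the mixture algorithm: count peaks of a lightly
    # smoothed frequency histogram.  The real method fits a Poisson mixture
    # by maximum likelihood; this placeholder only keeps the sketch runnable.
    hist = Counter(sample)
    freq = [hist.get(x, 0) for x in range(min(sample), max(sample) + 1)]
    sm = [sum(freq[max(0, i - 2):i + 3]) for i in range(len(freq))]
    peaks = sum(1 for i in range(1, len(sm) - 1)
                if sm[i - 1] < sm[i] >= sm[i + 1])
    return max(peaks, 1)

def bootstrap_k(sample, B=200, seed=1):
    # Nonparametric bootstrap: re-estimate k on B resamples drawn with
    # replacement, then report the mode of the bootstrap distribution of k.
    rng = random.Random(seed)
    n = len(sample)
    counts = Counter()
    for _ in range(B):
        boot = [sample[rng.randrange(n)] for _ in range(n)]
        counts[estimate_k(boot)] += 1
    return counts.most_common(1)[0][0]

rng = random.Random(7)
data = [poisson(rng, 2) for _ in range(150)] + [poisson(rng, 15) for _ in range(150)]
print(bootstrap_k(data))
```

With a real mixture fitter in place of `estimate_k`, the mode of the B bootstrap estimates is exactly the selection rule the abstract describes.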

8.
In extending univariate outlier detection methods to higher dimension, various issues arise: limited visualization methods, inadequacy of marginal methods, lack of a natural order, limited parametric modeling, and, when using Mahalanobis distance, restriction to ellipsoidal contours. To address and overcome such limitations, we introduce nonparametric multivariate outlier identifiers based on multivariate depth functions, which can generate contours following the shape of the data set. Also, we study masking robustness, that is, robustness against misidentification of outliers as nonoutliers. In particular, we define a masking breakdown point (MBP), adapting to our setting certain ideas of Davies and Gather [1993. The identification of multiple outliers (with discussion). Journal of the American Statistical Association 88, 782–801] and Becker and Gather [1999. The masking breakdown point of multivariate outlier identification rules. Journal of the American Statistical Association 94, 947–955] based on the Mahalanobis distance outlyingness. We then compare four affine invariant outlier detection procedures, based on Mahalanobis distance, halfspace or Tukey depth, projection depth, and “Mahalanobis spatial” depth. For the goal of threshold type outlier detection, it is found that the Mahalanobis distance and projection procedures are distinctly superior in performance, each with very high MBP, while the halfspace approach is quite inferior. When a moderate MBP suffices, the Mahalanobis spatial procedure is competitive in view of its contours not constrained to be elliptical and its computational burden relatively mild. A small sampling experiment yields findings completely in accordance with the theoretical comparisons. 
While these four depth procedures are relatively comparable for the purpose of robust affine equivariant location estimation, the halfspace depth is not competitive with the others for the quite different goal of robust setting of an outlyingness threshold.
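A toy two-dimensional version of the Mahalanobis-distance outlyingness rule (one of the four procedures compared above) can be sketched as follows; the threshold, data, and function name are my own illustrative choices:

```python
import math
import random

def mahalanobis_outliers(points, alpha=0.025):
    # Flag points whose squared Mahalanobis distance from the sample mean
    # exceeds the chi-square(2 df) upper-alpha quantile; with 2 degrees of
    # freedom that quantile has the closed form -2*ln(alpha).
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    det = sxx * syy - sxy * sxy
    cut = -2.0 * math.log(alpha)
    flagged = []
    for i, (x, y) in enumerate(points):
        dx, dy = x - mx, y - my
        # Explicit 2x2 inverse of the sample covariance in the quadratic form.
        d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
        if d2 > cut:
            flagged.append(i)
    return flagged

rng = random.Random(3)
cloud = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)]
cloud.append((8.0, 8.0))  # plant one gross outlier at index 200
print(mahalanobis_outliers(cloud))
```

Note the elliptical-contour restriction discussed above: the quadratic form fixes the shape of the detection boundary, which is exactly what the depth-based identifiers relax.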

9.
Let Π1,…,Πk be k populations, where Πi is Pareto with unknown scale parameter αi and known shape parameter βi, i=1,…,k. Suppose independent random samples (Xi1,…,Xin), i=1,…,k, of equal size are drawn from the k populations, and let Xi denote the smallest observation of the ith sample. The population corresponding to the largest Xi is selected. We consider the problem of estimating the scale parameter of the selected population and obtain the uniformly minimum variance unbiased estimator (UMVUE) when the shape parameters are assumed to be equal. An admissible class of linear estimators is derived. Further, a general inadmissibility result for scale-equivariant estimators is proved.

10.
Summary: This paper studies the DDMA-chart, a data-depth-based moving-average control chart for monitoring multivariate data. The chart is nonparametric and can detect location and scale changes in the process simultaneously. It improves upon the existing r- and Q-charts in the efficiency of detecting location changes. Both theoretical justifications and simulation studies are provided, as are comparisons with some existing multivariate control charts via simulation. Applications of the DDMA-chart to the analysis of airline performance data (collected by the FAA) are demonstrated. The results indicate that the DDMA-chart is an effective nonparametric multivariate control chart. *Research supported in part by grants from the National Science Foundation, the National Security Agency, and the Federal Aviation Administration. The discussion on aviation safety in this paper reflects the views of the authors, who are solely responsible for the accuracy of the analysis results presented herein, and does not necessarily reflect the official view or policy of the FAA. The dataset used in this paper has been partially masked in order to protect confidentiality.

11.
12.
Suppose exponential populations πi with parameters (μi, σi), i = 1,2,…,k, are given. This article discusses how to select “good” populations in the sense of Lam [1986. A new procedure for selecting good populations. Biometrika 73(1):201–206]. Depending on whether the σi's are known or unknown, several one-stage procedures and a two-stage procedure of selection are proposed. The two-stage procedure can be replaced by a one-stage procedure if the second-stage sample is proved intangible. An attractive feature of these procedures is that they need no new statistical tables to implement.

13.
A co-editor of “The Balance Point” column looks back at its twenty-year history, its current function, and its future in serving the serials professional and scholarly community. The author examines how the column emerged from an idea by then Serials Review editor Cindy Hepfer in 1988 to be a forum on important serials issues for practitioners who might not otherwise write formally on these topics. Through the 1990s and 2000s the column has continued to provide that function, as well as serve as an important place where authors are invited to explore serials issues much in need of a balanced approach. The author shares comments from past “Balance Point” column editors John Riddick, Mary Beth Clack, Ellen Finnie Duranceau, Karen Cargille, Markel Tumlin, and Kay Johnson on how they regarded the column, the rewards and challenges they faced, and how they see the future of this format in an evolving electronic communication milieu.

14.
We are considering the ABLUEs – asymptotically best linear unbiased estimators – of the location parameter μ and the scale parameter σ of the population jointly, based on a set of k selected sample quantiles, when the population distribution has a density of the location–scale form
(1/σ) f((x−μ)/σ),
where the standardized density f(u) is of known functional form. A set of selected sample quantiles with a designated spacing
0 < λ1 < λ2 < ⋯ < λk < 1,
or, in terms of u = (x−μ)/σ,
u1 < u2 < ⋯ < uk,
where
λi = ∫−∞^ui f(t) dt, i = 1,2,…,k,
is given by
x(n1) < x(n2) < ⋯ < x(nk),
where ni = [nλi] + 1. The asymptotic distribution of the k sample quantiles when n is very large is
h(x(n1), x(n2), …, x(nk); μ, σ) = (2πσ²)^(−k/2) [λ1(λ2−λ1)⋯(λk−λk−1)(1−λk)]^(−1/2) (f1 f2 ⋯ fk) n^(k/2) exp(−nS/(2σ²)),
where
fi = f(ui), i = 0,1,…,k,k+1, with f0 = fk+1 = 0, λ0 = 0, λk+1 = 1.
The relative efficiency of the joint estimation is η(μ,σ), in which the constant κ is independent of the spacing {λi}. The optimal spacing is the spacing which maximizes the relative efficiency η(μ,σ). We will prove the following rather remarkable theorem.
Theorem. The optimal spacing for the joint estimation is symmetric, i.e.
λi + λk−i+1 = 1,
or
ui + uk−i+1 = 0, i = 1,2,…,k,
if the standardized density f(u) of the population is differentiable infinitely many times and symmetric:
f(−u) = f(u), f′(−u) = −f′(u).

15.
Consider a parallel system with n independent components. Assume that the lifetime of the jth component follows an exponential distribution with a constant but unknown parameter λj, 1 ≤ j ≤ n. We test rj components of type j for failure and compute the total time Tj of the rj failures for the jth component. Based on T = (T1,T2,…,Tn) and r = (r1,r2,…,rn), we derive optimal reliability test plans which ensure the usual probability requirements on system reliability. Further, we solve the associated nonlinear integer programming problem by a simple enumeration of integers over the feasible range. An algorithm is developed to obtain integer solutions with minimum cost. Finally, some examples are discussed for various levels of producer's and consumer's risk to illustrate the approach. Our optimal plans lead to considerable savings in costs over the available plans in the literature.
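The enumeration idea can be sketched as follows. The feasibility check here is a toy stand-in (the paper's constraint encodes producer's and consumer's risks on system reliability), and all names and numbers are illustrative:

```python
from itertools import product

def optimal_plan(costs, feasible, r_max):
    # Enumerate every integer vector r = (r_1,...,r_n) with 1 <= r_j <= r_max
    # and keep the cheapest feasible one -- the simple enumeration over the
    # feasible range described above.  `feasible` is a caller-supplied
    # predicate standing in for the reliability requirements.
    best, best_cost = None, float("inf")
    n = len(costs)
    for r in product(range(1, r_max + 1), repeat=n):
        c = sum(ci * ri for ci, ri in zip(costs, r))
        if c < best_cost and feasible(r):
            best, best_cost = r, c
    return best, best_cost

# Toy stand-in constraint: at least 10 failures must be observed in total.
plan, cost = optimal_plan(costs=[3.0, 5.0, 2.0],
                          feasible=lambda r: sum(r) >= 10,
                          r_max=8)
print(plan, cost)
```

Enumeration is affordable because the feasible range for each r_j is small; here the cheapest plan loads the inexpensive third component type.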

16.
We consider here a generalization of the skew-normal distribution, GSN(λ1,λ2,ρ), defined through a standard bivariate normal distribution with correlation ρ, which is a special case of the unified multivariate skew-normal distribution studied recently by Arellano-Valle and Azzalini [2006. On the unification of families of skew-normal distributions. Scand. J. Statist. 33, 561–574]. We then present some simple and useful properties of this distribution and also derive its moment generating function in an explicit form. Next, we show that distributions of order statistics from the trivariate normal distribution are mixtures of these generalized skew-normal distributions; thence, using the established properties of the generalized skew-normal distribution, we derive the moment generating functions of order statistics, and also present expressions for means and variances of these order statistics. Next, we introduce a generalized skew-tν distribution, which is a special case of the unified multivariate skew-elliptical distribution presented by Arellano-Valle and Azzalini [2006. On the unification of families of skew-normal distributions. Scand. J. Statist. 33, 561–574] and is in fact a three-parameter generalization of Azzalini and Capitanio's [2003. Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t distribution. J. Roy. Statist. Soc. Ser. B 65, 367–389] univariate skew-tν form. We then use the relationship between the generalized skew-normal and skew-tν distributions to discuss some properties of generalized skew-tν as well as distributions of order statistics from bivariate and trivariate tν distributions. We show that these distributions of order statistics are indeed mixtures of generalized skew-tν distributions, and then use this property to derive explicit expressions for means and variances of these order statistics.

17.
18.
In July 2004, Cindy Hepfer asked friends and colleagues: “What question would you like to ask Clifford Lynch if you had the chance?” As a result, Clifford Lynch discusses a wide variety of topics and issues impacting the serials community, ranging from Open Access, institutional repositories, what we can learn from Google and Amazon, and Shibboleth, to his favorite places to travel and how he prepares for presentations.

19.
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (pBH) provide good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that pBH are essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
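A stripped-down sketch of hypothesis-test denoising of Poisson counts in the Haar domain. This is my own illustration, not the paper's Bi-Haar/Fisher machinery: it uses a single plain Haar level and an exact conditional binomial test (under equal intensities, one count of a pair is Binomial(total, 1/2)):

```python
import math
import random

def binom_pvalue(x, n):
    # Exact two-sided test of p = 1/2: sum the probabilities of all outcomes
    # no more likely than the observed one.
    if n == 0:
        return 1.0
    p = sum(math.comb(n, j) for j in range(n + 1)
            if math.comb(n, j) <= math.comb(n, x)) / 2 ** n
    return min(1.0, p)

def haar_ht_denoise(counts, alpha=0.01):
    # One decimated Haar level: for each pair (a, b), keep the detail
    # (the raw pair) only when the test rejects equal intensity; otherwise
    # zero the detail, i.e. replace the pair by its average.
    out = []
    for i in range(0, len(counts) - 1, 2):
        a, b = counts[i], counts[i + 1]
        if binom_pvalue(a, a + b) < alpha:   # significant jump: keep it
            out += [a, b]
        else:                                # noise: smooth it away
            m = (a + b) / 2.0
            out += [m, m]
    if len(counts) % 2:
        out.append(counts[-1])
    return out

rng = random.Random(5)
flat = [sum(rng.random() < 0.5 for _ in range(20)) for _ in range(8)]  # noisy plateau
signal = flat + [100, 2]  # one genuine sharp jump at the end
print(haar_ht_denoise(signal))
```

The test-based rule controls the false positive rate per coefficient, which is the core of the HT framework; the paper's contribution is making this work with the smoother Bi-Haar filter bank at the same cost.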

20.
The standard error of the maximum-likelihood estimator for 1/μ based on a random sample of size N from the normal distribution N(μ,σ²) is infinite. This could be considered a disadvantage. Another disadvantage is that the bias of the estimator is undefined if the integral is interpreted in the usual sense as a Lebesgue integral. It is shown here that the integral expression for the bias can be interpreted in the sense given by the Schwartz theory of generalized functions. Furthermore, an explicit closed-form expression in terms of the complex error function is derived. It is also proven that unbiased estimation of 1/μ is impossible. Further results on the maximum-likelihood estimator are investigated, including closed-form expressions for the generalized moments and corresponding complete asymptotic expansions. It is observed that the problem can be reduced to a one-parameter problem, and this holds also for more general location–scale problems. The parameter can be interpreted as a shape parameter for the distribution of the maximum-likelihood estimator. An alternative estimator is suggested, motivated by the asymptotic expansion for the bias, and it is argued that the suggested estimator is an improvement. The method used for the construction of the estimator is simple and generalizes to other parametric families. The problem leads to a rediscovery of a generalized mathematical expectation introduced originally by Kolmogorov [1933. Foundations of the Theory of Probability, second ed. Chelsea Publishing Company (1956)]. A brief discussion of this, and some related integrals, is provided. It is in particular argued that the principal-value expectation provides a reasonable location parameter in cases where it exists. This does not hold generally for expectations interpreted in the sense given by the Schwartz theory of generalized functions.
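A toy Monte Carlo (my own illustration, not from the paper) showing why summaries of the MLE must be handled with care: since the sample mean can land arbitrarily close to 0, the sampling distribution of 1/x̄ has no finite moments, so the sketch reports a median rather than a mean:

```python
import random
import statistics

def mle_inv_mean(sample):
    # By invariance of maximum likelihood, the MLE of 1/mu under
    # N(mu, sigma^2) sampling is 1/xbar.
    return 1.0 / statistics.fmean(sample)

rng = random.Random(11)
mu, sigma, N = 2.0, 1.0, 25
reps = [mle_inv_mean([rng.gauss(mu, sigma) for _ in range(N)])
        for _ in range(4000)]

# E[1/xbar] does not exist, but the median is well defined and sits
# near the target value 1/mu = 0.5.
print(statistics.median(reps))
```

The median here behaves like the principal-value location discussed above: it exists and is sensible even though the ordinary expectation is not defined.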


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号