Similar Articles
20 similar articles found
1.
For raw optical density (ROD) data, such as those generated in biological assays employing an ELISA plate reader, EDp-optimal designs are identified for a family of homogeneous non-linear models with two parameters. In every case, the theoretical EDp-optimal design has one or two support points. These theoretical optimal designs might not be suitable for many practical applications. To overcome this shortcoming, we have specified EDp-optimal designs within the class of k-point equally spaced and uniform designs. The efficiency robustness of these designs with respect to the initial nominal values of the parameters has been investigated.
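
The abstract does not state the model family, so the following sketch assumes, purely for illustration, a two-parameter Emax-type model f(x; t1, t2) = t1·x/(t2 + x) with homoscedastic errors, for which EDp = p·t2/(1 − p). It evaluates the EDp (c-optimality) criterion — the asymptotic variance of the EDp estimate — for k-point equally spaced uniform designs; all function names and nominal values below are hypothetical.

```python
import numpy as np

# Hypothetical two-parameter Emax-type model: f(x; t1, t2) = t1 * x / (t2 + x).
# Under this model ED_p = p * t2 / (1 - p), so the ED_p criterion is the
# asymptotic variance c' M(design)^{-1} c with c = (0, p / (1 - p)).

def grad(x, t1, t2):
    """Gradient of the mean response with respect to (t1, t2)."""
    return np.array([x / (t2 + x), -t1 * x / (t2 + x) ** 2])

def info_matrix(points, weights, t1, t2):
    """Normalized information matrix of a design with given support and weights."""
    M = np.zeros((2, 2))
    for x, w in zip(points, weights):
        g = grad(x, t1, t2)
        M += w * np.outer(g, g)
    return M

def edp_criterion(points, weights, t1, t2, p):
    """Asymptotic variance of the ED_p estimate (smaller is better)."""
    c = np.array([0.0, p / (1 - p)])
    return c @ np.linalg.solve(info_matrix(points, weights, t1, t2), c)

# k-point equally spaced uniform designs on an illustrative dose range,
# evaluated at illustrative nominal parameter values.
t1, t2, p = 1.0, 2.0, 0.5
for k in (2, 3, 5, 10):
    pts = np.linspace(0.1, 10.0, k)
    wts = np.full(k, 1.0 / k)
    print(k, edp_criterion(pts, wts, t1, t2, p))
```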

2.
In this paper, a multivariate form of the truncated generalized Cauchy distribution (TGCD), denoted by MVTGCD, is introduced. The joint density function, conditional density function, moment generating function and mixed moments of order $b=\sum_{i=1}^{k}b_{i}$ are obtained. Making use of the mixed-moments formula, the skewness and kurtosis are obtained for the bivariate case. Also, all parameters of the distribution are estimated using the maximum likelihood and Bayes methods. A real data set is introduced and analyzed using three models: the bivariate Cauchy distribution, the truncated bivariate Cauchy distribution and the bivariate truncated generalized Cauchy distribution. A comparison is carried out between these models based on the corresponding Kolmogorov–Smirnov (K–S) test statistic to show that the bivariate truncated generalized Cauchy model fits the data better than the other models.

3.
We find optimal designs for linear models using a novel algorithm that iteratively combines a semidefinite programming (SDP) approach with adaptive grid techniques. The proposed algorithm is also adapted to find locally optimal designs for nonlinear models. The search space is first discretized, and SDP is applied to find the optimal design on the initial grid. The points in the next grid are those that maximize the dispersion function of the SDP-generated optimal design, found using nonlinear programming. The procedure is repeated until a user-specified stopping rule is reached. The proposed algorithm is broadly applicable, and we demonstrate its flexibility using (i) models with one or more variables and (ii) differentiable design criteria, such as A- and D-optimality, and non-differentiable criteria such as E-optimality, including the mathematically more challenging case when the minimum eigenvalue of the information matrix of the optimal design has geometric multiplicity larger than 1. Our algorithm is computationally efficient because it is based on mathematical programming tools, so optimality is assured at each stage, and it exploits the convexity of the problems whenever possible. Using several linear and nonlinear models with one or more factors, we show that the proposed algorithm can efficiently find optimal designs.
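
A minimal sketch of one pass of the grid-and-refine idea, here for a D-optimal design of a quadratic regression model on [−1, 1]. It uses cvxpy's log_det formulation (a convex surrogate for the explicit SDP of the paper) and a crude refinement step that keeps the fine-grid points where the dispersion function of the current design is nearly maximal; the code is illustrative and is not the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def regressors(x):
    """Regression vectors f(x) = (1, x, x^2) for the quadratic model."""
    return np.column_stack([np.ones_like(x), x, x ** 2])

def solve_on_grid(grid):
    """D-optimal weights on a fixed candidate grid via a log-det program."""
    F = regressors(grid)
    w = cp.Variable(len(grid), nonneg=True)
    M = sum(w[i] * np.outer(F[i], F[i]) for i in range(len(grid)))
    cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1]).solve()
    return w.value

grid = np.linspace(-1.0, 1.0, 21)          # initial coarse grid
w = solve_on_grid(grid)

# Dispersion (standardized variance) function of the current design; by the
# general equivalence theorem its maximum equals 3 at a D-optimal design.
F = regressors(grid)
Minv = np.linalg.inv(F.T @ (w[:, None] * F))
fine = np.linspace(-1.0, 1.0, 2001)
disp = np.einsum('ij,jk,ik->i', regressors(fine), Minv, regressors(fine))

# Refine: keep the current support plus the fine-grid points where the
# dispersion function is (nearly) maximal, then re-solve on the new grid.
support = grid[w > 1e-4]
new_pts = fine[disp > disp.max() - 1e-3]
grid2 = np.unique(np.concatenate([support, new_pts]))
w2 = solve_on_grid(grid2)
print(np.round(grid2[w2 > 1e-3], 3), np.round(w2[w2 > 1e-3], 3))
```

For this model the known D-optimal design puts equal weight on {−1, 0, 1}, which gives a convenient sanity check on the output.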

4.
In this paper, we investigate nonparametric estimation of the distribution function F of an absolutely continuous random variable. Two methods are analyzed: the first is based on the empirical distribution function, expressed in terms of i.i.d. lattice random variables; the second is the kernel method, which involves non-lattice random vectors that depend on the sample size n and produces a smooth distribution estimator that is explicitly corrected to reduce the effect of bias or variance. For both methods, the non-Studentized and Studentized statistics are considered, as well as their bootstrap counterparts, and asymptotic expansions are constructed to approximate their distribution functions via Edgeworth expansion techniques. On this basis, we obtain confidence intervals for F(x) and state the coverage error order achieved in each case.
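
As a point of reference for the two estimators compared in the paper, here is a minimal sketch of the empirical distribution function and a Gaussian-kernel smoothed distribution estimator; the bandwidth is a rough rule-of-thumb choice, and the bias/variance corrections and Edgeworth-based intervals studied in the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x_sample = rng.normal(size=200)

def ecdf(x, data):
    """Empirical distribution function evaluated at the points in x."""
    return np.mean(data[:, None] <= x, axis=0)

def kernel_cdf(x, data, h):
    """Kernel-smoothed distribution function with a Gaussian kernel and bandwidth h."""
    return np.mean(norm.cdf((x - data[:, None]) / h), axis=0)

grid = np.linspace(-3, 3, 7)
h = 1.06 * x_sample.std() * len(x_sample) ** (-1 / 5)   # rough rule-of-thumb bandwidth
print(np.round(ecdf(grid, x_sample), 3))
print(np.round(kernel_cdf(grid, x_sample, h), 3))
print(np.round(norm.cdf(grid), 3))   # true F(x) for this simulated example
```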

5.
We present a new generalized family of skew two-piece skew-elliptical (GSTPSE) models and derive some of its statistical properties. It is shown that the new family of distributions may be written as a mixture of generalized skew-elliptical distributions. Also, a new representation theorem for a special case of the GSTPSE distribution is given. Next, we focus on the t kernel density and prove that it is a scale mixture of generalized skew two-piece skew-normal distributions. An explicit expression for the central moments, as well as recurrence relations for its cumulative distribution function and density, are obtained. Since this special case can be uni- or bimodal, a sufficient condition for each case is given. A real data set on the heights of Australian female athletes is analysed. Finally, some concluding remarks and open problems are discussed.

6.
Starting from Milbrodt (1985), the asymptotic behaviour of experiments associated with Poisson sampling, Rejective sampling and its Sampford-Durbin modification is investigated. As superpopulation models, so-called Lr-generated regression parameter families (1 ⩽ r ⩽ 2) are considered, also allowing for the presence of nuisance parameters. Under some assumptions on the first-order inclusion probabilities, it can be shown that the sampling experiments converge weakly if the underlying shift parameter families do so. In case of convergence, the limit of the sampling experiments is characterized in terms of its Hellinger transforms and its Lévy-Khintchine representation, leading to criteria for the limit to be a pure Gaussian or a pure Poisson experiment, respectively. These results are then applied to the situation of sampling in the presence of random non-response, and to establish local asymptotic normality (LAN) under more restrictive conditions. Applications also include asymptotic optimality properties of tests based on Horvitz-Thompson-type statistics, and LAM bounds and criteria for adaptivity when testing or estimating a continuous linear functional in LAN situations. They especially cover the case of sampling from an unknown symmetric distribution, which has been subject to detailed investigation in the i.i.d. case.

7.
In this note we propose a novel kernel density estimator for directly estimating the probability density and cumulative distribution function of an L-estimate from a single population, based on the theory in Knight (1985) in conjunction with classic inversion theory. This idea is further developed into a kernel density estimator for the difference of L-estimates from two independent populations. The methodology is developed via a "plug-in" approach, but it is distinct from classic bootstrap methodology in that it is analytically and computationally feasible to provide an exact estimate of the distribution function, thus eliminating resampling-related error. The asymptotic and finite-sample properties of our estimators are examined. The procedure is illustrated by generating the kernel density estimate of Tukey's trimean from a small data set.

8.
In this paper, we obtain minimax and near-minimax nonrandomized decision rules under zero–one loss for a restricted location parameter of an absolutely continuous distribution. Two types of rules are addressed: monotone and nonmonotone. A complete-class theorem is proved for the monotone case. This theorem extends the previous work of Zeytinoglu and Mintz (1984) to the case of 2e-MLR sampling distributions. A class of continuous monotone nondecreasing rules is defined. This class contains the monotone minimax rules developed in this paper. It is shown that each rule in this class is Bayes with respect to nondenumerably many priors. A procedure for generating these priors is presented. Nonmonotone near-minimax almost-equalizer rules are derived for problems characterized by non-2e-MLR distributions. The derivation is based on the evaluation of a distribution-dependent function Qc. The methodological importance of this function is that it is used to unify the discrete- and continuous-parameter problems, and to obtain a lower bound on the minimax risk for the non-2e-MLR case.

9.
It is already known that the convolution of a bounded density with itself can be estimated at the root-n rate using two asymptotically equivalent kernel estimators: (i) the Frees estimator (Frees, 1994) and (ii) the Saavedra and Cao estimator (Saavedra and Cao, 2000). In this work, we investigate the efficiency of these estimators of the convolution of a bounded density. The efficiency criterion used is that of a least dispersed regular estimator, as described in Begun et al. (1983). This concept is based on the Hájek–Le Cam convolution theorem for locally asymptotically normal (LAN) families.
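
For concreteness, a hedged sketch of a Frees (1994)-type estimator of the self-convolution g = f∗f: a kernel density estimator applied to the pairwise sums X_i + X_j (i < j). The Gaussian kernel and the fixed bandwidth are illustrative choices, not those analysed in the paper.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
x = rng.normal(size=100)

def frees_convolution(t, data, h):
    """Kernel estimate of the self-convolution density g = f*f evaluated at t,
    built from all pairwise sums X_i + X_j (i < j) with a Gaussian kernel."""
    sums = np.array([xi + xj for xi, xj in combinations(data, 2)])
    u = (t - sums[:, None]) / h
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi), axis=0) / h

grid = np.linspace(-4, 4, 9)
h = 0.5   # illustrative bandwidth
est = frees_convolution(grid, x, h)
true = np.exp(-grid ** 2 / 4) / np.sqrt(4 * np.pi)   # f*f is the N(0, 2) density here
print(np.round(est, 3))
print(np.round(true, 3))
```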

10.
For ergodic ARCH processes, we introduce a one-parameter family of Lp-estimators. The construction is based on the concept of weighted M-estimators. Under weak assumptions on the error distribution, consistency is established. Asymptotic normality is proved for the special cases p = 1 and p = 2. To prove the asymptotic normality of the L1-estimator, one needs the existence of a density of the squared errors, whereas for the L2-estimator the existence of fourth moments is assumed. The asymptotic covariance matrix of the estimator depends on the unknown parameter, which can be replaced by consistent estimators. For the L1-estimator we construct a kernel estimator for the unknown density of the squared errors.

11.
A unified definition of maximum likelihood (ML) is given. It is based on a pairwise comparison of probability measures near the observed data point. This definition does not suffer from the usual inadequacies of earlier definitions, i.e., it does not depend on the choice of a density version in the dominated case. The definition covers the undominated case as well, i.e., it provides a consistent approach to nonparametric ML problems, which heretofore have been solved on a more or less ad hoc basis. It is shown that the new ML definition is a true extension of the classical ML approach as it is practiced in the dominated case. Hence the classical methodology can simply be subsumed. Parametric and nonparametric examples are discussed.

12.
This paper presents the trace of the covariance matrix of the estimates of effects based on a fractional 2^m factorial (2^m-FF) design T of resolution V for the following two cases: one in which T is constructed by adding some restricted assemblies to an orthogonal array, and one in which T is constructed by removing some restricted assemblies from an orthogonal array of index unity. In the class of 2^m-FF designs of resolution V considered here, optimal designs with respect to the trace criterion, i.e. A-optimal designs, are presented for m = 4, 5, and 6 and for a range of practical values of N (the total number of assemblies). Some of them are better than the corresponding A-optimal designs in the class of balanced fractional 2^m factorial designs of resolution V obtained by Srivastava and Chopra (1971b), in the sense that the trace of the covariance matrix of the estimates is smaller.
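
To make the trace (A-optimality) criterion concrete, the sketch below builds the resolution-V model matrix (general mean, main effects and two-factor interactions) for an arbitrary set of ±1 assemblies and reports tr[(X'X)^{-1}], which is proportional to the summed variances of the effect estimates. The full 2^4 factorial used here is for illustration only and is not one of the paper's fractional designs.

```python
import numpy as np
from itertools import product, combinations

def model_matrix(design):
    """Resolution-V model matrix: intercept, main effects, two-factor interactions."""
    design = np.asarray(design, dtype=float)
    cols = [np.ones(len(design))]
    cols += [design[:, i] for i in range(design.shape[1])]
    cols += [design[:, i] * design[:, j]
             for i, j in combinations(range(design.shape[1]), 2)]
    return np.column_stack(cols)

def trace_criterion(design):
    """tr[(X'X)^{-1}], proportional to the summed variances of the effect estimates."""
    X = model_matrix(design)
    return np.trace(np.linalg.inv(X.T @ X))

# Full 2^4 factorial (N = 16 assemblies) as an illustrative design.
full = np.array(list(product([-1, 1], repeat=4)))
print(trace_criterion(full))   # 11 effect columns / 16 runs = 0.6875 for this orthogonal design
```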

13.
In this paper, we consider a mixed compound Poisson process, that is, a random sum of independent and identically distributed (i.i.d.) random variables where the number of terms is a Poisson process with random intensity. We study nonparametric estimators of the jump density by specific deconvolution methods. Firstly, assuming that the random intensity has an exponential distribution with unknown expectation, we propose two types of estimators based on the observation of an i.i.d. sample. Risk bounds and adaptive procedures are provided. Then, with no assumption on the distribution of the random intensity, we propose two nonparametric estimators of the jump density based on the joint observation of the number of jumps and the random sum of jumps. Risk bounds are provided, leading to unusual rates for one of the two estimators. The methods are implemented and compared via simulations.
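
To fix ideas about the observation scheme, here is a simulation sketch of the data described in the abstract: the random intensity is exponential with unknown mean (the paper's first setting) and the jump density is taken to be standard normal purely for illustration. Only the data-generating mechanism is shown; the deconvolution estimators themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_mixed_compound_poisson(n, T=1.0, mean_intensity=2.0):
    """Draw n i.i.d. copies of (N_T, S_T): Lambda ~ Exponential(mean_intensity),
    N_T | Lambda ~ Poisson(Lambda * T), S_T = sum of N_T i.i.d. N(0, 1) jumps."""
    lam = rng.exponential(scale=mean_intensity, size=n)     # random intensities
    counts = rng.poisson(lam * T)                           # number of jumps on [0, T]
    sums = np.array([rng.normal(size=k).sum() for k in counts])
    return counts, sums

counts, sums = simulate_mixed_compound_poisson(500)
print(counts[:10], np.round(sums[:10], 3))
```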

14.
We consider the problem of determining sharp upper bounds on the expected values of non-extreme order statistics based on i.i.d. random variables taking on at most N values. We show that the bound problem is equivalent to the problem of establishing the best approximation of the projection of the density function of the respective order statistic, based on the standard uniform i.i.d. sample, onto the family of non-decreasing functions by arbitrary N-valued functions in the norm of the L2(0,1) space. We also present an algorithm converging to the local minima of the approximation problems.

15.
The basic assumption underlying the concept of ranked set sampling is that actual measurement of units is expensive, whereas ranking is cheap. This may not be true in reality in certain cases where ranking may be moderately expensive. In such situations, based on total cost considerations, k-tuple ranked set sampling is known to be a viable alternative, where one selects k units (instead of one) from each ranked set. In this article, we consider estimation of the distribution function based on k-tuple ranked set samples when the cost of selecting and ranking units is not ignorable. We investigate estimation in both the balanced and unbalanced data cases. Properties of the estimation procedure in the presence of ranking error are also investigated. Results of simulation studies as well as an application to a real data set are presented to illustrate some of the theoretical findings.
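
The abstract does not spell out the k-tuple selection scheme, so the sketch below uses a simple stand-in with perfect ranking: for each rank r and each cycle, k independent judgment sets of size m are drawn and the r-th order statistic of each is quantified, and F is estimated by averaging the per-rank empirical CDFs. Costs and ranking errors, which are central to the paper, are not modelled.

```python
import numpy as np

rng = np.random.default_rng(3)

def ktuple_rss(m, k, cycles, draw):
    """Stand-in k-tuple RSS with perfect ranking: for each rank r = 1..m and each
    cycle, draw k independent judgment sets of m units and quantify the r-th
    order statistic of each, giving k measurements per rank per cycle."""
    data = {r: [] for r in range(m)}
    for _ in range(cycles):
        for r in range(m):
            for _ in range(k):
                s = np.sort(draw(m))
                data[r].append(s[r])
    return {r: np.array(v) for r, v in data.items()}

def rss_cdf(t, data):
    """Balanced RSS estimator of F(t): average of the per-rank empirical CDFs."""
    return np.mean([np.mean(v <= t) for v in data.values()])

sample = ktuple_rss(m=4, k=2, cycles=50, draw=lambda n: rng.normal(size=n))
for t in (-1.0, 0.0, 1.0):
    print(t, round(rss_cdf(t, sample), 3))
```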

16.
Kernel density estimation for multivariate circular data has been formulated only when the sample space is the sphere, but theory for the torus would also be useful. For data lying on a d-dimensional torus (d ≥ 1), we discuss kernel estimation of a density, its mixed partial derivatives, and their squared functionals. We introduce a specific class of product kernels whose order is suitably defined so as to obtain L2-risk formulas whose structure can be compared to their Euclidean counterparts. Our kernels are based on circular densities; however, we also discuss smaller-bias estimation involving negative kernels which are functions of circular densities. Practical rules for selecting the smoothing degree, based on cross-validation, bootstrap and plug-in ideas, are derived. Moreover, we provide specific results on the use of kernels based on the von Mises density. Finally, real-data examples and simulation studies illustrate the findings.
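
A minimal sketch of a product von Mises kernel density estimator on the d-dimensional torus, with the concentration parameter kappa playing the role of the smoothing degree; the data-driven selection rules and the bias-reduced (negative) kernels discussed in the paper are not implemented.

```python
import numpy as np
from scipy.special import i0

def vonmises_product_kde(theta, data, kappa):
    """Product von Mises kernel density estimate on the d-torus.
    theta: (d,) evaluation point; data: (n, d) angles; kappa: concentration."""
    diff = theta - data                                   # (n, d) angular differences
    log_kernel = kappa * np.cos(diff) - np.log(2 * np.pi * i0(kappa))
    return np.mean(np.exp(log_kernel.sum(axis=1)))        # average of product kernels

rng = np.random.default_rng(4)
data = rng.vonmises(mu=[0.0, np.pi / 2], kappa=4.0, size=(300, 2)) % (2 * np.pi)
print(round(vonmises_product_kde(np.array([0.0, np.pi / 2]), data, kappa=8.0), 4))
```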

17.
Various approximate methods have been proposed for obtaining a two-tailed confidence interval for the ratio R of two proportions (independent samples). This paper evaluates 73 different methods (64 of which are new methods or modifications of older methods) and concludes that: (1) none of the classic methods (including the well-known score method) is acceptable, since they are too liberal; (2) the best of the classic methods is the one based on the logarithmic transformation (after increasing the data by 0.5), but it is only valid for large samples and moderate values of R; (3) the best of the 73 methods is based on an approximation to the score method (after adding 0.5 to all the data), with the added advantage that the interval is obtained by a simple method (i.e. solving a second-degree equation); and (4) an option that is simpler than the previous one, and almost as effective for moderate values of R, consists of applying the classic Wald method (after adding a quantity to the data which is usually $z_{\alpha/2}^{2}/4$).
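
The "classic method based on the logarithmic transformation (after increasing the data by 0.5)" mentioned in conclusion (2) can be sketched directly. The adjustment below adds 0.5 to every cell count (successes and failures), which may differ in detail from the paper's adjustment, and this interval is one of the methods being compared, not the recommended score-based one, whose closed form the abstract does not give.

```python
import numpy as np
from scipy.stats import norm

def log_ratio_ci(x1, n1, x2, n2, level=0.95, adj=0.5):
    """Wald-type CI for R = p1/p2 on the log scale after adding `adj` to every
    cell count (successes and failures), so each sample size grows by 2*adj."""
    x1, x2 = x1 + adj, x2 + adj
    n1, n2 = n1 + 2 * adj, n2 + 2 * adj
    r = (x1 / n1) / (x2 / n2)
    se = np.sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)   # SE of log(R_hat)
    z = norm.ppf(0.5 + level / 2)
    return r * np.exp(-z * se), r * np.exp(z * se)

print(tuple(round(v, 3) for v in log_ratio_ci(8, 20, 4, 25)))
```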

18.
Longitudinal data analysis in epidemiological settings is complicated by large multiplicities of short time series and the occurrence of missing observations. To handle such difficulties, Rosner & Muñoz (1988) developed a weighted non-linear least squares algorithm for estimating parameters of first-order autoregressive (AR1) processes with time-varying covariates. This method proved efficient when compared to complete-case procedures. Here that work is extended by (1) introducing a different estimation procedure based on the EM algorithm, and (2) formulating estimation techniques for second-order autoregressive models. The second development is important because some of the intended areas of application (adult pulmonary function decline, childhood blood pressure) have autocorrelation functions which decay more slowly than the geometric rate imposed by an AR1 model. Simulation studies are used to compare the three methodologies (non-linear, EM-based and complete-case) with respect to bias, efficiency and coverage, both in the presence and in the absence of time-varying covariates. Differing degrees and mechanisms of missingness are examined. Preliminary results indicate the non-linear approach to be the method of choice: it has high efficiency and is easily implemented. An illustrative example concerning pulmonary function decline in the Netherlands is analyzed using this method.

19.
We investigate the problem of estimating a smooth invertible transformation f when observing independent samples X1, …, Xn ∼ P∘f, where P is a known measure. We focus on the two-dimensional case, where P and f are defined on R2. We present a flexible class of smooth invertible transformations in two dimensions with variational equations for optimizing over the classes, and then study the problem of estimating the transformation f by penalized maximum likelihood estimation. We apply our methodology to the case when P∘f has a density with respect to Lebesgue measure on R2 and demonstrate improvements over kernel density estimation on three examples.

20.
Parametric and permutation testing for multivariate monotonic alternatives
We are firstly interested in testing the homogeneity of k mean vectors against two-sided restricted alternatives in multivariate normal distributions. This problem is a multivariate extension of Bartholomew (in Biometrika 46:328–335, 1959b) and an extension of Sasabuchi et al. (in Biometrika 70:465–472, 1983) and Kulatunga and Sasabuchi (in Mem. Fac. Sci., Kyushu Univ. Ser. A: Mathematica 38:151–161, 1984) to two-sided ordered hypotheses. We examine the testing problem in two separate cases: one in which the covariance matrices are known, and one in which they are unknown but common. For the general case in which the covariance matrices are known, the test statistic is obtained using the likelihood ratio method. When the known covariance matrices are common and diagonal, the null distribution of the test statistic is derived and its critical values are computed at different significance levels. A Monte Carlo study is also presented to estimate the power of the test. A test statistic is proposed for the case when the common covariance matrices are unknown. Since it is difficult to compute the exact p-value for this testing problem with the classical method when the covariance matrices are completely unknown, we first present a reformulation of the test statistic based on orthogonal projections onto closed convex cones and then determine upper bounds for its p-values. We also provide a general nonparametric solution based on the permutation approach and the nonparametric combination of dependent tests.
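
A hedged sketch of the permutation ingredient only: a label-permutation test of homogeneity of k mean vectors based on a simple between-group statistic. It ignores the two-sided restricted (ordered) structure and the nonparametric combination machinery treated in the paper and is meant solely to illustrate the resampling scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

def between_group_stat(X, labels):
    """Sum over groups of n_g * ||group mean - grand mean||^2."""
    grand = X.mean(axis=0)
    return sum(
        X[labels == g].shape[0] * np.sum((X[labels == g].mean(axis=0) - grand) ** 2)
        for g in np.unique(labels)
    )

def permutation_test(X, labels, n_perm=2000):
    """Permutation p-value for homogeneity of the k mean vectors."""
    obs = between_group_stat(X, labels)
    count = sum(
        between_group_stat(X, rng.permutation(labels)) >= obs for _ in range(n_perm)
    )
    return (count + 1) / (n_perm + 1)

# Three bivariate normal groups, the third shifted (illustrative data only).
X = np.vstack([rng.normal(size=(30, 2)),
               rng.normal(size=(30, 2)),
               rng.normal(loc=0.8, size=(30, 2))])
labels = np.repeat([0, 1, 2], 30)
print(permutation_test(X, labels))
```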
