Similar documents
20 similar documents found.
1.
It was a major breakthrough when design-based stereological methods for vertical sections were developed by Adrian Baddeley and coworkers in the 1980s. Most importantly, it was shown how to estimate, in a design-based fashion, surface area from observations in random vertical sections with uniform position and uniform rotation around the vertical axis. The great practical importance of these developments is due to the fact that some biostructures can only be recognised on vertical sections. Later, local design-based estimation of mean particle volume from vertical sections was developed. In the present paper, we review these important advances in stereology. Quite recently, vertical sections have gained renewed interest, since it has been shown that mean particle shape can be estimated from such sections. These new developments are also reviewed in the present paper.

2.
Uniform stochastic orderings of random variables are expressed as total positivity (TP) of density, survival, and distribution functions. The orderings are called uniform because each is a stochastic order that persists under conditioning to a family of intervals—for example, the family consisting of all intervals of the form (-∞,x]. This paper is concerned with the preservation of uniform stochastic ordering under convolution, mixing, and the formation of coherent systems. A general TP2 result involving preservation of total positivity under integration is presented and applied to convolutions and mixtures of distribution and survival functions. Log-concavity of distribution, survival, and density functions characterizes distributions that preserve the various orderings under convolution. Likewise, distributions that preserve orderings under mixing are characterized by TP2 distribution and survival functions.
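The uniform (hazard-rate) ordering described above can be illustrated numerically. The following sketch is illustrative only, not from the paper: for X ~ Exp(2) and Y ~ Exp(1), it checks that the survival-function ratio is monotone, which is the TP2 condition characterizing the uniform ordering.

```python
import numpy as np

# Hazard-rate order demo: X ~ Exp(2), Y ~ Exp(1).
# X precedes Y in the uniform (hazard-rate) order iff the ratio
# S_Y(t) / S_X(t) of survival functions is nondecreasing in t,
# i.e. the 2 x n array of survival functions is TP2.
t = np.linspace(0.0, 5.0, 200)
S_X = np.exp(-2.0 * t)   # survival function of Exp(2)
S_Y = np.exp(-1.0 * t)   # survival function of Exp(1)
ratio = S_Y / S_X        # equals exp(t), which is increasing

# Monotonicity of the ratio means the stochastic order persists under
# conditioning to every interval (x, infinity) -- the "uniform" property.
monotone = bool(np.all(np.diff(ratio) > 0))
```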

3.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi-Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed; the inverse-Gaussian based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to handle the nonstationarity accurately, whereas the normal-based Lugannani & Rice (1980) approximation cannot. Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to the matrix renewal theory needed for computing the moment generating functions that are used in the saddlepoint methods.

4.
Tim Fischer & Udo Kamps, Statistics, 2013, 47(1), 142–158
There are several well-known mappings which transform the first r common order statistics in a sample of size n from a standard uniform distribution into a full vector of dimension r of order statistics in a sample of size r from a uniform distribution. Continuing the results reported in a previous paper by the authors, it is shown that transformations of these types do not, in general, lead to order statistics from an i.i.d. sample of random variables when applied to order statistics from non-uniform distributions. By accepting the loss of one dimension, a structure-preserving transformation exists for power function distributions.

5.
We propose a new stochastic approximation (SA) algorithm for maximum-likelihood estimation (MLE) in the incomplete-data setting. This algorithm is most useful for problems where the EM algorithm is not feasible due to an intractable E-step or M-step. Compared to other algorithms that have been proposed for intractable EM problems, such as the MCEM algorithm of Wei and Tanner (1990), our proposed algorithm appears more generally applicable and efficient. The approach we adopt is inspired by the Robbins-Monro (1951) stochastic approximation procedure, and we show that the proposed algorithm can be used to solve some of the long-standing problems in computing an MLE with incomplete data. We prove that, in general, O(n) simulation steps are required to compute the MLE with the SA algorithm, whereas O(n log n) simulation steps are required using the MCEM and/or MCNR algorithms, where n is the sample size of the observations. Examples include computing the MLE in the nonlinear errors-in-variables model and the nonlinear regression model with random effects.
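The Robbins-Monro idea underlying such SA algorithms can be sketched in a few lines. This is a toy illustration, not the authors' algorithm: to solve E[H(θ, Z)] = 0, the iterate moves by a decreasing gain times a noisy evaluation of H.

```python
import numpy as np

# Toy Robbins-Monro recursion: find the root of M(theta) = E[Z] - theta
# for Z ~ N(2, 1), using only noisy draws of Z.
rng = np.random.default_rng(0)
theta, target = 0.0, 2.0
for n in range(1, 5001):
    z = rng.normal(target, 1.0)     # noisy observation with E[z] = target
    gamma = 1.0 / n                 # gains: sum = inf, sum of squares < inf
    theta += gamma * (z - theta)    # stochastic root-finding step
```

With the gain sequence 1/n this recursion reduces to a running sample mean, which makes the almost-sure convergence to the root easy to see.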

6.
This paper introduces a skewed log-Birnbaum–Saunders regression model based on the skewed sinh-normal distribution proposed by Leiva et al. [A skewed sinh-normal distribution and its properties and application to air pollution, Comm. Statist. Theory Methods 39 (2010), pp. 426–443]. Some influence methods, such as local influence and generalized leverage, are presented. Additionally, we derive the normal curvatures of local influence under some perturbation schemes. An empirical application to a real data set is presented in order to illustrate the usefulness of the proposed model.

7.
We introduce a uniform generalized order statistic process. It is a simple Markov process whose initial segment can be identified with a set of uniform generalized order statistics. A standard marginal transformation leads to a generalized order statistic process related to non-uniform generalized order statistics. It is then demonstrated that the nth variable in such a process has the same distribution as an nth Pfeifer record value. This process representation of Pfeifer records facilitates discussion of the possible limit laws for Pfeifer records and, in some cases, of sums thereof. Because of the close relationship between Pfeifer records and generalized order statistics, the results shed some light on the problem of determining the nature of the possible limiting distributions of the largest generalized order statistic.

8.
This paper deals with the estimation of the error distribution function in a varying coefficient regression model. We propose two estimators and study their asymptotic properties by obtaining uniform stochastic expansions. The first estimator is a residual-based empirical distribution function. We study this estimator when the varying coefficients are estimated by under-smoothed local quadratic smoothers. Our second estimator, which exploits the fact that the error distribution has mean zero, is a weighted residual-based empirical distribution function whose weights are chosen to achieve the mean-zero property using empirical likelihood methods. The second estimator improves on the first. Bootstrap confidence bands based on the two estimators are also discussed.
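As a minimal illustration of the first, residual-based estimator, here is a toy sketch in which a known coefficient function stands in for the under-smoothed local-quadratic estimate; the model and names are ours, not the paper's.

```python
import numpy as np

# Toy varying-coefficient model y = beta(x) * x + e with mean-zero errors.
# The true beta is used in place of a local-quadratic fit, so the
# residuals below are exact; the form of the estimator is the same.
rng = np.random.default_rng(4)
n = 500
x = rng.uniform(-1.0, 1.0, n)
eps = rng.normal(0.0, 0.3, n)
y = np.sin(x) * x + eps

resid = y - np.sin(x) * x          # residuals from the fitted model

def F_hat(t):
    """Residual-based empirical distribution function."""
    return float(np.mean(resid <= t))
```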

9.
The uniform scores test is a rank-based method for testing the homogeneity of k populations in circular data problems. The influence of ties on the uniform scores test has been emphasized by several authors, and it has been suggested that the test be used with caution when ties are present in the data. This paper investigates the influence of ties on the uniform scores test by computing the power of the test when ties are broken by the average, randomization, permutation, minimum, and maximum methods. Monte Carlo simulation is used to compute the power of the test under several scenarios, such as 5% or 10% ties and various tie group structures in the data. The simulation study shows no significant difference among the methods in the presence of ties, but the test loses power when there are many ties or complicated tie group structures. Thus, the randomization and average methods are equally powerful ways to break ties when applying the uniform scores test. It can also be concluded that the k-sample uniform scores test can be used safely, without sacrificing power, when there are fewer than 5% ties or at most two small groups of ties.
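A minimal sketch of the k-sample uniform scores statistic with the two recommended tie-breaking methods; the function name and implementation details are ours, not the paper's.

```python
import numpy as np

def uniform_scores_stat(groups, tie_method="average", rng=None):
    """k-sample uniform scores statistic for circular data.

    Pooled angles are ranked, ranks r are mapped to uniform scores
    beta = 2*pi*r/N, and the squared resultant lengths are summed per
    group.  Ties are broken by mid-ranks ('average') or by random
    perturbation ('random').
    """
    pooled = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    N = pooled.size
    if tie_method == "random":
        if rng is None:
            rng = np.random.default_rng()
        keys = pooled + rng.uniform(0.0, 1e-9, N)  # tiny jitter breaks ties
    else:
        keys = pooled
    order = np.argsort(keys, kind="stable")
    ranks = np.empty(N)
    ranks[order] = np.arange(1, N + 1)
    if tie_method == "average":                    # mid-ranks for tied values
        for v in np.unique(pooled):
            mask = pooled == v
            ranks[mask] = ranks[mask].mean()
    beta = 2.0 * np.pi * ranks / N
    stat, start = 0.0, 0
    for g in groups:
        b = beta[start:start + len(g)]
        start += len(g)
        stat += (np.cos(b).sum() ** 2 + np.sin(b).sum() ** 2) / len(g)
    return 2.0 * stat
```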

10.
In this paper, an alternative skew Student-t family of distributions is studied. It is obtained as an extension of the generalized Student-t (GS-t) family introduced by McDonald and Newey [10]. The extension obtained can be seen as a reparametrization of the skewed GS-t distribution considered by Theodossiou [14]. A key element in the construction of the extension is that it can be stochastically represented as a mixture of an epsilon-skew-power-exponential distribution [1] and a generalized-gamma distribution. From this representation, we can readily derive theoretical properties and easy-to-implement simulation schemes. Furthermore, we study some of its main properties, including the stochastic representation, moments, and asymmetry and kurtosis coefficients. We also derive the Fisher information matrix, which is shown to be nonsingular in some special cases, such as when the asymmetry parameter is zero, that is, in the vicinity of symmetry, and discuss maximum-likelihood estimation. Simulation studies for some particular cases and a real data analysis are also reported, illustrating the usefulness of the extension considered.

11.
The re-randomization test has been considered a robust alternative to traditional population model-based methods for analyzing randomized clinical trials. This is especially so when the trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among the various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and its power thus compromised. We find that the bias is due to the non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including in the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.

12.
Local polynomial quasi-likelihood estimation has several good statistical properties, such as high minimax efficiency and adaptation to edge effects. In this paper, we construct a local quasi-likelihood regression estimator for a left-truncated model and establish the asymptotic normality of the proposed estimator when the observations form a stationary and α-mixing sequence, thereby extending the corresponding result of Fan et al. [Local polynomial kernel regression for generalized linear models and quasi-likelihood functions, J. Amer. Statist. Assoc. 90 (1995), pp. 141–150] from independent, complete data to dependent, truncated data. Finite-sample behaviour of the estimator is also investigated via simulations.

13.

The sample entropy (Vasicek, 1976) has been the most widely used nonparametric entropy estimator due to its simplicity, but its distribution function has remained unknown, even though its moments are required in establishing the entropy-based goodness-of-fit test statistic (Soofi et al., 1995). In this paper we derive the nonparametric distribution function of the sample entropy as a piecewise uniform distribution, in light of Theil (1980) and Dudewicz and van der Meulen (1987). We then establish entropy-based goodness-of-fit test statistics based on the nonparametric distribution functions of the sample entropy and the modified sample entropy (Ebrahimi et al., 1994), and compare their performance for the exponential and normal distributions.

14.
In this study we are concerned with inference on the correlation parameter ρ of two Brownian motions, when only high-frequency observations from two one-dimensional continuous Itô semimartingales, driven by these particular Brownian motions, are available. Estimators for ρ are constructed in two situations: either when both components are observed (at the same time), or when only one component is observed and the other one represents its volatility process and thus has to be estimated from the data as well. In the first case it is shown that our estimator has the same asymptotic behaviour as the standard one for i.i.d. normal observations, whereas a feasible estimator can still be defined in the second framework, but with a slower rate of convergence.
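In the first (fully observed) situation, a natural estimator is the realized correlation of the increments; a simulation sketch in our own notation, not the authors':

```python
import numpy as np

# Simulate two correlated Brownian motions observed at high frequency
# on [0, 1] and estimate rho by the realized correlation of increments.
rng = np.random.default_rng(1)
n, rho = 20000, 0.6
dt = 1.0 / n
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
dW1, dW2 = np.sqrt(dt) * z1, np.sqrt(dt) * z2   # Brownian increments

# Realized correlation: realized covariance over root realized variances.
rho_hat = np.sum(dW1 * dW2) / np.sqrt(np.sum(dW1**2) * np.sum(dW2**2))
```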

15.
In modelling financial return time series and time-varying volatility, the Gaussian and Student-t distributions are widely used in stochastic volatility (SV) models. However, other distributions, such as the Laplace distribution and the generalized error distribution (GED), are also common in SV modelling. This paper therefore proposes the use of the generalized t (GT) distribution, whose special cases include the Gaussian distribution, Student-t distribution, Laplace distribution and GED. Since the GT distribution is a member of the scale mixture of uniform (SMU) family of distributions, we handle it via its SMU representation. We show that this SMU form can substantially simplify the Gibbs sampler for Bayesian simulation-based computation and can provide a means of identifying outliers. In an empirical study, we adopt a GT–SV model to fit the daily returns of the exchange rates of the Australian dollar against three other currencies, using the exchange rate against the US dollar as a covariate. Model implementation relies on Bayesian Markov chain Monte Carlo algorithms using the WinBUGS package.

16.
In this article, Pitman closeness of sample order statistics to population quantiles of a location-scale family of distributions is discussed. Explicit expressions are derived for some specific families such as the uniform, exponential, and power function distributions. Numerical results are then presented for these families for sample sizes n = 10, 15 and for the choices p = 0.10, 0.25, 0.75, 0.90. The Pitman-closest order statistic is also determined and presented in these cases.
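Pitman closeness of two order statistics to a quantile is easy to approximate by Monte Carlo; a sketch for the uniform family (the paper derives exact expressions, so this is only a sanity check with our own names):

```python
import numpy as np

# Monte Carlo Pitman closeness for Uniform(0,1): which order statistic
# is closer to the p-th quantile xi_p = p more than half the time?
rng = np.random.default_rng(2)
n, p = 10, 0.25
reps = 20000
samples = np.sort(rng.uniform(size=(reps, n)), axis=1)

def pitman_prob(i, j):
    """Estimate P(|X_(i) - p| < |X_(j) - p|) for 1-based indices i, j."""
    di = np.abs(samples[:, i - 1] - p)
    dj = np.abs(samples[:, j - 1] - p)
    return float(np.mean(di < dj))
```

For p = 0.25 and n = 10, the order statistic X_(3) sits near np = 2.5 and should beat a distant one such as X_(8) in Pitman closeness.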

17.
In this paper, a new nonparametric methodology is developed for testing whether the changing pattern of a response variable over multiple ordered sub-populations in one treatment group differs from that in another treatment group. The question is formalized as a nonparametric two-sample comparison problem for the stochastic order among subsamples, through U-statistics with accommodations for zero-inflated distributions. A novel bootstrap procedure is proposed to obtain the critical values at a given type I error, and bootstrapped p-values are obtained through simulated samples. It is proven that the distribution of the test statistic is independent of the underlying distributions of the subsamples, provided certain sufficient statistics are given. Furthermore, this study develops a feasible framework for power studies to determine sample sizes, which is necessary in real-world applications. Simulation results suggest that the test is consistent. The methodology is illustrated using a biological experiment with a split-plot design, where significant differences in the changing patterns of seed weight between treatments are found with relatively small subsample sizes.

18.
Estimation of the Pareto tail index from extreme order statistics is an important problem in many settings. The upper tail of the distribution, where data are sparse, is typically fitted with a model, such as the Pareto model, from which quantities such as probabilities associated with extreme events are deduced. The success of this procedure relies heavily not only on the choice of the estimator for the Pareto tail index but also on the procedure used to determine the number k of extreme order statistics that are used for the estimation. The authors develop a robust prediction error criterion for choosing k and estimating the Pareto index. A Monte Carlo study shows the good performance of the new estimator and the analysis of real data sets illustrates that a robust procedure for selection, and not just for estimation, is needed.
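The abstract does not fix a particular estimator; the classical choice built from the top k order statistics is the Hill estimator, sketched here for orientation (this is the standard estimator, not the authors' robust procedure):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the Pareto tail index alpha from the k largest
    order statistics: alpha_hat = k / sum_i log(X_(n-i+1) / X_(n-k))."""
    x = np.sort(np.asarray(x, dtype=float))
    top = x[-k:]             # k largest observations
    threshold = x[-k - 1]    # (k+1)-th largest acts as the threshold
    return float(k / np.sum(np.log(top / threshold)))

# Exact Pareto(alpha = 2) sample via inverse-transform sampling.
rng = np.random.default_rng(3)
pareto = rng.uniform(size=50000) ** (-1.0 / 2.0)
alpha_hat = hill_estimator(pareto, 2000)
```

The sensitivity of alpha_hat to the choice of k is exactly what motivates a principled selection criterion such as the one the authors propose.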

19.

In this paper, we begin by establishing the existence of a minimal (maximal) Lp (1 < p ≤ 2) solution to a one-dimensional backward stochastic differential equation (BSDE), where the generator g satisfies a p-order weak monotonicity condition together with a general growth condition in y and a linear growth condition in z. We then propose and prove a comparison theorem for Lp (1 < p ≤ 2) solutions to one-dimensional BSDEs with q-order (1 ≤ q < p) weakly monotonic and uniformly continuous generators. As a consequence, an existence and uniqueness result for Lp (1 < p ≤ 2) solutions is also given for BSDEs whose generator g is q-order (1 ≤ q < p) weakly monotonic with general growth in y and uniformly continuous in z.

20.
Sequential experimentation is an indispensable strategy that is applied extensively across science and engineering. In such experiments, it is desirable that a given design retain its properties as much as possible when a few runs are added to it. Designs based on this sequential strategy are called extended designs. In this paper, we study the theoretical properties of such experimental strategies using a uniformity measure, and we derive a lower bound for extended designs under the wrap-around L2-discrepancy measure. Moreover, we provide an algorithm to construct uniform (or nearly uniform) extended designs. For ease of understanding, some examples are presented, and many sequential strategies for a 27-run original design are tabulated for practical use.
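The wrap-around L2-discrepancy used as the uniformity measure has a closed form; a direct implementation of the standard formula (a sketch for orientation, not the paper's lower bound or algorithm):

```python
import numpy as np

def wrap_around_L2(design):
    """Wrap-around L2-discrepancy (WD2) of an n x s design with points
    in [0, 1)^s:

    WD2^2 = -(4/3)^s
            + (1/n^2) * sum_{i,j} prod_k [3/2 - |x_ik - x_jk|(1 - |x_ik - x_jk|)]
    """
    x = np.asarray(design, dtype=float)
    n, s = x.shape
    diff = np.abs(x[:, None, :] - x[None, :, :])        # |x_ik - x_jk|
    prod = np.prod(1.5 - diff * (1.0 - diff), axis=2)   # product over the s factors
    return float(np.sqrt(-(4.0 / 3.0) ** s + prod.sum() / n**2))

# A centred regular grid is more uniform than a clumped design, so its
# discrepancy should be smaller.
even = (np.arange(8)[:, None] + 0.5) / 8.0
clumped = np.full((8, 1), 0.5)
```

For the fully clumped one-factor design above, all pairwise differences vanish, so WD2^2 = 3/2 - 4/3 = 1/6 exactly, which is a useful hand check.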

