Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
The analysis of jury size and jury verdicts in criminal matters now has a long, though interrupted, history. Work on this subject in the 18th and 19th centuries by Condorcet and Laplace is discussed, and the Poisson model of the 1830s is highlighted. The latter is modified to analyze the American jury experience of the 20th century. Recent U.S. Supreme Court decisions in the 1970s on jury size and jury decision-making have created a resurgence of interest, especially in the comparison of six-member and twelve-member juries. Some comparisons of size in terms of probabilities of errors in verdicts are presented.
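As a concrete illustration of the kind of calculation in the last sentence, the following is a minimal sketch in the spirit of the Condorcet/Poisson tradition: each juror is assumed to err independently with a fixed probability, and the jury errs when enough jurors do. The per-juror error probability and the simple majority rule are illustrative assumptions, not the exact model analyzed in the paper.

```python
from math import comb

def wrong_verdict_prob(n, p, rule):
    """Probability that a jury of size n returns a wrong verdict when each
    juror independently errs with probability p and at least `rule` votes
    are needed to carry the (wrong) verdict."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(rule, n + 1))

p = 0.2  # assumed per-juror error probability (illustrative only)
print(wrong_verdict_prob(6, p, rule=4))    # simple majority of a six-member jury
print(wrong_verdict_prob(12, p, rule=7))   # simple majority of a twelve-member jury
```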

2.
Sometimes the additive hazard rate model is more important to study than the celebrated proportional hazard rate model (Cox, 1972). However, the concept of the hazard function is somewhat abstract compared with that of the mean residual life function. In this paper, we define a new model, called the 'dynamic additive mean residual life model', in which the covariates are time dependent, and study the closure of this model under different stochastic orders.
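For orientation, an additive mean residual life regression with time-dependent covariates might be written as below. This is a hedged sketch by analogy with the additive hazards model; the notation (m_0, beta, z(t)) is assumed for illustration, and the paper's own definition should be consulted for the exact formulation.

```latex
% Sketch of an additive mean residual life model with time-dependent covariates z(t)
% (m_0(t): baseline mean residual life; \beta: regression coefficients -- assumed notation).
m\bigl(t \mid z(t)\bigr) \;=\; E\bigl[\,T - t \mid T > t,\; z(t)\bigr]
  \;=\; m_0(t) + \beta^{\top} z(t)
```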

3.
For experiments running in field plots or over time, the observations are often correlated due to spatial or serial correlation, which leads to correlated errors in a linear model analyzing the treatment means. Without knowing the exact correlation matrix of the errors, it is not possible to compute the generalized least-squares estimator for the treatment means and use it to construct optimal designs for the experiments. In this paper, we propose to use neighborhoods to model the covariance matrix of the errors, and apply a modified generalized least-squares estimator to construct robust designs for experiments with blocks. A minimax design criterion is investigated, and a simulated annealing algorithm is developed to find robust designs. We have derived several theoretical results, and representative examples are presented.

4.
Under non-additive probabilities, cluster points of the empirical average have been proved to quasi-surely fall into the interval constructed by either the lower and upper expectations or the lower and upper Choquet expectations. In this paper, based on a newly introduced notion of independence, we obtain a different Marcinkiewicz-Zygmund type strong law of large numbers. The Kolmogorov type strong law of large numbers can then be derived from it directly, stating that the closed interval between the lower and upper expectations is the smallest one that covers cluster points of the empirical average quasi-surely.

5.
A random field displays long (resp. short) memory when its covariance function is absolutely non-summable (resp. summable), or alternatively when its spectral density (spectrum) is unbounded (resp. bounded) at some frequencies. Drawing on the spectrum approach, this paper characterizes both short and long memory features in the spatial autoregressive model. The data generating process is presented as a sequence of spatial autoregressive micro-relationships. The study elaborates the exact conditions under which short and long memories emerge for micro-relationships and for the aggregated field as well. To study the spectrum of the aggregated field, we develop a new general concept referred to as the 'root order of a function'. This concept might be usefully applied in studying the convergence of some special integrals. We illustrate our findings with simulation experiments and an empirical application based on Gross Domestic Product data for 100 countries spanning 1960–2004.
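Written out with the usual notation for a covariance function γ and spectral density f (symbols assumed here for illustration), the two characterizations in the opening sentence read:

```latex
% Short memory: absolutely summable covariances
% (equivalently, in the spectral characterization, a bounded spectral density).
\text{short memory:}\quad \sum_{h}\lvert\gamma(h)\rvert<\infty
  \quad\text{(or } f(\lambda)\text{ bounded for all }\lambda\text{)},
\\[4pt]
\text{long memory:}\quad \sum_{h}\lvert\gamma(h)\rvert=\infty
  \quad\text{(or } f(\lambda)\text{ unbounded at some }\lambda\text{)}.
```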

6.
In the context of ACD models for ultra-high-frequency data, different specifications are available to estimate the conditional mean of intertrade durations, while quantile estimation has been completely neglected in the literature, even though for trading purposes it can be more informative. The main problem arising in quantile estimation is the correct specification of the durations' probability law: the usual assumption of exponentially distributed residuals is very robust for the estimation of the parameters of the conditional mean, but dramatically fails as a distributional fit. In this paper a semiparametric approach is formalized and compared with the parametric one derived from the exponential assumption. Empirical evidence for a stock on the Italian financial market strongly supports the former approach. Paola Zuccolotto: The author wishes to thank Prof. A. Mazzali, Dott. G. De Luca, Dott. M. Sandri for valuable comments.
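To make the quantile issue concrete, here is a minimal sketch (the ACD(1,1) recursion, its parameter values, and the toy data are assumptions for illustration): once a conditional mean ψ_t has been fitted, the exponential assumption pins down every conditional quantile, so a poor distributional fit directly distorts the quantiles even when the mean is estimated well.

```python
import numpy as np

def acd_conditional_means(durations, omega, alpha, beta):
    """Conditional means psi_t of an ACD(1,1) model:
    psi_t = omega + alpha * x_{t-1} + beta * psi_{t-1}."""
    psi = np.empty_like(durations, dtype=float)
    psi[0] = durations.mean()
    for t in range(1, len(durations)):
        psi[t] = omega + alpha * durations[t - 1] + beta * psi[t - 1]
    return psi

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)                 # toy intertrade durations
psi = acd_conditional_means(x, omega=0.1, alpha=0.1, beta=0.8)

tau = 0.95
q_exponential = -psi * np.log(1 - tau)  # tau-quantile implied by the exponential assumption
print(q_exponential[:5])
```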

7.
Genichi Taguchi has emphasized the use of designed experiments in several novel and important applications. In this paper we focus on the use of statistical experimental designs in designing products to be robust to environmental conditions. The engineering concept of robust product design is very important because it is frequently impossible or prohibitively expensive to control or eliminate variation resulting from environmental conditions. Robust product design enables the experimenter to discover how to modify the design of the product to minimize the effect due to variation from environmental sources. In experiments of this kind, Taguchi's total experimental arrangement consists of a cross-product of two experimental designs: an inner array containing the design factors and an outer array containing the environmental factors. Except in situations where both these arrays are small, this arrangement may involve a prohibitively large amount of experimental work. One of the objectives of this paper is to show how this amount of work can be reduced. In this paper we investigate the applicability of split-plot designs for this particular experimental situation. Consideration of the efficiency of split-plot designs and an examination of several variants of split-plot designs indicates that experiments conducted in a split-plot mode can be of tremendous value in robust product design, since they not only enable the contrasts of interest to be estimated efficiently, but the experiments can also be considerably easier to conduct than the designs proposed by Taguchi.
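A minimal sketch of the crossed inner/outer array layout described above; the factor counts and two-level codings are assumptions chosen only to show how quickly the run count grows, which is what motivates the split-plot alternatives studied in the paper.

```python
from itertools import product

# Inner array: design (control) factors; outer array: environmental (noise) factors.
inner_array = list(product([-1, 1], repeat=3))   # 2^3 = 8 design-factor runs (assumed)
outer_array = list(product([-1, 1], repeat=2))   # 2^2 = 4 environmental runs (assumed)

# Taguchi's crossed arrangement: every design run is tested at every noise setting.
crossed = [(d, e) for d in inner_array for e in outer_array]
print(len(crossed))  # 8 * 4 = 32 runs, growing multiplicatively as either array grows
```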

8.
In all empirical or experimental sciences, it is a standard approach to present results, in addition to point estimates, in the form of confidence intervals on the parameters of interest. The length of a confidence interval characterizes the accuracy of the findings. Consequently, confidence intervals should be constructed to attain a desired length. Basic ideas go back to Stein (1945) and Seelbinder (1953), who proposed a two-stage procedure for hypothesis testing about a normal mean. Tukey (1953) additionally considered the probability, or power, with which a confidence interval should hold its length within a desired boundary. In this paper, an adaptive multi-stage approach is presented that can be considered as an extension of Stein's concept. Concrete rules for sample size updating are provided. Following an adaptive two-stage design of O'Brien and Fleming (1979) type, a real data example is worked out in detail.
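A minimal sketch of Stein's (1945) two-stage idea for a normal mean, which the multi-stage approach extends: after a pilot stage of size n0, the total sample size is chosen so that the resulting confidence interval has half-width at most d. The pilot data, the half-width d, and the confidence level below are illustrative assumptions.

```python
import math
import numpy as np
from scipy.stats import t

def stein_total_sample_size(pilot, d, alpha=0.05):
    """Total sample size for a fixed-width (half-width d) confidence interval
    for a normal mean, based on Stein's two-stage procedure."""
    n0 = len(pilot)
    s2 = np.var(pilot, ddof=1)                   # stage-one variance estimate
    t_crit = t.ppf(1 - alpha / 2, df=n0 - 1)
    n = math.ceil(t_crit**2 * s2 / d**2)
    return max(n0, n)

rng = np.random.default_rng(1)
pilot = rng.normal(loc=10.0, scale=2.0, size=15)  # stage-one data (illustrative)
print(stein_total_sample_size(pilot, d=0.5))
```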

9.
The main goal in small area estimation is to use models to 'borrow strength' from the ensemble because the direct estimates of small area parameters are generally unreliable. However, model-based estimates from the small areas do not usually match the value of the single estimate for the large area. Benchmarking is done by applying a constraint, internally or externally, to ensure that the 'total' of the small areas matches the 'grand total'. This is particularly useful because it is difficult to check model assumptions owing to the sparseness of the data. We use a Bayesian nested error regression model, which incorporates unit-level covariates and sampling weights, to develop a method to internally benchmark the finite population means of small areas. We use two examples to illustrate our method. We also perform a simulation study to further assess the properties of our method.
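For context, the benchmarking constraint itself can be illustrated with a simple ratio adjustment, as in the sketch below; this is not the Bayesian internal-benchmarking method of the paper, and the estimates, weights, and direct total are made-up numbers.

```python
import numpy as np

def ratio_benchmark(small_area_estimates, weights, large_area_total):
    """Rescale model-based small-area estimates so that their weighted total
    matches the direct 'grand total' for the large area."""
    current_total = np.dot(weights, small_area_estimates)
    return small_area_estimates * (large_area_total / current_total)

est = np.array([12.0, 8.5, 15.2, 9.3])   # model-based small-area means (assumed)
w = np.array([0.3, 0.2, 0.3, 0.2])       # population share of each area (assumed)
direct_total = 11.0                      # reliable direct estimate for the large area
print(ratio_benchmark(est, w, direct_total))
```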

10.
Properties of a specification test for the parametric form of the variance function in diffusion processes are discussed. The test is based on the estimation of certain integrals of the volatility function. If the volatility function does not depend on the variable x, it is known that the corresponding statistics have an asymptotic normal distribution. However, most models of mathematical finance use a volatility function which depends on the state x. In this paper we prove that in the general case, where σ also depends on x, the estimates of integrals of the volatility converge stably in law to random variables with a non-standard limit distribution. The limit distribution depends on the diffusion process X_t itself, and we use this result to develop a bootstrap test for the parametric form of the volatility function, which is consistent in the general diffusion model.

11.
The standard approach to non-parametric bivariate density estimation is to use a kernel density estimator. Practical performance of this estimator is hindered by the fact that the estimator is not adaptive (in the sense that the level of smoothing is not sensitive to local properties of the density). In this paper a simple, automatic and adaptive bivariate density estimator is proposed based on the estimation of marginal and conditional densities. Asymptotic properties of the estimator are examined, and guidance to practical application of the method is given. Application to two examples illustrates the usefulness of the estimator as an exploratory tool, particularly in situations where the local behaviour of the density varies widely. The proposed estimator is also appropriate for use as a pilot estimate for an adaptive kernel estimate, since it is relatively inexpensive to calculate.

12.
A. Wong, Statistical Papers, 1998, 39(2): 189–201
The use of the Pareto distribution as a model for various socio-economic phenomena dates back to the late nineteenth century. Recently, it has also been recognized as a useful model for the analysis of lifetime data. In this paper, we apply the approximate studentization method to obtain inference for the scale parameter of the Pareto distribution, and also for the strong Pareto law. Moreover, we extend the method to construct prediction limits for the jth smallest future observation based on the first k observed data.
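For background on the distribution itself, the sketch below shows plain maximum likelihood estimation of the Pareto scale and shape parameters; this is not the approximate studentization method of the paper, and the simulated data are an illustrative assumption.

```python
import numpy as np

def pareto_mle(x):
    """ML estimates of the scale (sigma) and shape (alpha) parameters of a
    Pareto distribution with density alpha * sigma**alpha / x**(alpha + 1), x >= sigma."""
    x = np.asarray(x, dtype=float)
    sigma_hat = x.min()
    alpha_hat = len(x) / np.sum(np.log(x / sigma_hat))
    return sigma_hat, alpha_hat

rng = np.random.default_rng(2)
sample = 2.0 * (1.0 + rng.pareto(a=3.0, size=200))  # classical Pareto with sigma=2, alpha=3
print(pareto_mle(sample))
```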

13.
The goal of this paper is to compare the performance of two estimation approaches, the quasi-likelihood estimating equation and the pseudo-likelihood equation, against model mis-specification for non-separable binary data. This comparison, to the authors' knowledge, has not been done yet. In this paper, we first extend the quasi-likelihood work on spatial data to non-separable binary data. Some asymptotic properties of the quasi-likelihood estimate are also briefly discussed. We then use the techniques of a truncated Gaussian random field with a quasi-likelihood type model and a Gibbs sampler with a conditional model in the Markov random field to generate spatial–temporal binary data, respectively. For each simulated data set, both estimation methods are used to estimate parameters. Some discussion of the simulation results is also included.

14.
Cancer surveillance research requires accurate estimates of risk factors at the small area level. These risk factors are often obtained from surveys such as the National Health Interview Survey (NHIS) or the Behavioral Risk Factors Surveillance System (BRFSS). The NHIS is a nationally representative, face-to-face survey with a high response rate; however, it cannot produce state or substate estimates of risk factor prevalence because the sample sizes are too small and small area identifiers are unavailable to the public. The BRFSS is a state level telephone survey that excludes non-telephone households and has a lower response rate, but it does provide reasonable sample sizes in all states and many counties and has publicly available small area identifiers (counties). We propose a novel extension of dual-frame estimation using propensity scores that allows the complementary strengths of each survey to compensate for the weakness of the other. We apply this method to obtain 1999–2000 county level estimates of adult male smoking prevalence and mammogram usage rates among females who were 40 years old and older. We consider evidence that these NHIS-adjusted estimates reduce the effects of selection bias and non-telephone coverage in the BRFSS. Data from the Current Population Survey Tobacco Use Supplement are also used to evaluate the performance of this approach. A hybrid estimator that selects one of the two estimators on the basis of the mean-square error is also considered.

15.
In this paper we propose a new nonparametric estimator of the conditional distribution function under a semiparametric censorship model. We establish an asymptotic representation of the estimator as a sum of iid random variables, balanced by some kernel weights. This representation is used for obtaining large sample results such as the rate of uniform convergence of the estimator, or its limit distributional law. We prove that the new estimator outperforms the conditional Kaplan–Meier estimator for censored data, in the sense that it exhibits lower asymptotic variance. Illustration through real data analysis is provided.

16.
Usually, two different types of shock models (extreme and cumulative shock models) are employed to model dynamic risk processes. In extreme shock models, only the impact of the current, fatal shock is usually taken into account, whereas in cumulative shock models the impact of the preceding shocks is accumulated as well. However, in practice, the effect of a given shock can be realized in both ways within one model (i.e., it can be fatal or, otherwise, it is accumulated). This observation justifies the consideration of a 'combined shock model' in risk modeling and analysis. In this paper, we generalize the study of the dynamic risk processes that were previously considered in the literature. The main theme of this paper is to find the optimal allocation policies for the generalized combined risk processes via stochastic comparisons of survival functions. It will be seen that the obtained results hold for 'general counting processes' of shocks. In addition, we consider the problem of maximizing a gain function under certain risks and obtain reasonable decisions based on a variability measure. Furthermore, meaningful explanations for the results on the policy ordering are provided.

17.
Sample entropy based tests, methods of sieves and Grenander estimation type procedures are known to be very efficient tools for assessing normality of underlying data distributions in one-dimensional nonparametric settings. Recently, it has been shown that the density based empirical likelihood (EL) concept extends and standardizes these methods, presenting a powerful approach for approximating optimal parametric likelihood ratio test statistics in a distribution-free manner. In this paper, we discuss difficulties related to constructing density based EL ratio techniques for testing bivariate normality and propose a solution to this problem. Toward this end, a novel bivariate sample entropy expression is derived and shown to satisfy the known concept related to bivariate histogram density estimations. Monte Carlo results show that the new density based EL ratio tests for bivariate normality behave very well for finite sample sizes. To exemplify the applicability of the proposed approach, we present a real data example.
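As background for the entropy-based machinery, the sketch below shows the classical one-dimensional spacing estimator of entropy (Vasicek's estimator) on which such normality tests are built; the bivariate extension developed in the paper is not shown, and the window size m and the toy sample are assumptions.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek's spacing-based estimate of differential entropy:
    (1/n) * sum log( n * (x_(i+m) - x_(i-m)) / (2m) ),
    with order statistics clamped at the sample minimum and maximum."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n * (upper - lower) / (2 * m)))

rng = np.random.default_rng(3)
z = rng.normal(size=500)
# For a normal sample this should be close to the Gaussian maximum entropy 0.5*log(2*pi*e*var).
print(vasicek_entropy(z, m=10), 0.5 * np.log(2 * np.pi * np.e * np.var(z)))
```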

18.
A means for utilizing auxiliary information in surveys is to sample with inclusion probabilities proportional to given size values, that is, to use a πps design, preferably with fixed sample size. A novel candidate in that context is Pareto πps. This sampling scheme was derived by limit considerations and it works with a degree of approximation for finite samples. Desired and factual inclusion probabilities do not agree exactly, which in turn leads to some estimator bias. The central topic in this paper is to derive conditions for the bias to be negligible. Practically useful information on small sample behavior of Pareto πps can, to the best of our understanding, be gained only by numerical studies. Earlier investigations to that end have been too limited to allow general conclusions, while this paper reports on findings from an extensive numerical study. The chief conclusion is that the estimator bias is negligible in almost all situations met in survey practice.
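A minimal sketch of how a Pareto πps sample is drawn, following the standard description of Rosén's scheme (ranking variables built from uniform draws and the target inclusion probabilities); the size values and sample size below are assumptions, and all target inclusion probabilities are assumed to be below one.

```python
import numpy as np

def pareto_pps_sample(sizes, n, rng):
    """Draw a Pareto pi-ps sample of fixed size n with target inclusion
    probabilities proportional to the given size values (all assumed < 1)."""
    sizes = np.asarray(sizes, dtype=float)
    lam = n * sizes / sizes.sum()            # desired inclusion probabilities
    u = rng.uniform(size=len(sizes))
    q = (u * (1 - lam)) / (lam * (1 - u))    # Pareto ranking variables
    return np.argsort(q)[:n]                 # units with the n smallest ranking values

rng = np.random.default_rng(4)
size_values = rng.uniform(1.0, 5.0, size=50)  # toy size measures (assumed)
print(pareto_pps_sample(size_values, n=10, rng=rng))
```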

19.
Multivariate correlated failure time data arise in many medical and scientific settings. In the analysis of such data, it is important to use models where the parameters have simple interpretations. In this paper, we formulate a model for bivariate survival data based on the Plackett distribution. The model is an alternative to the Gamma frailty model proposed by Clayton and Oakes. The parameter in this distribution has a very appealing odds ratio interpretation for dependence between the two failure times; in addition, it allows for negative dependence. We develop novel semiparametric estimation and inference procedures for the model. The asymptotic results of the estimator are developed. The performance of the proposed techniques in finite samples is examined using simulation studies; in addition, the proposed methods are applied to data from an observational study in cancer.
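For reference, the sketch below evaluates the standard Plackett copula, whose parameter θ carries the global odds-ratio style interpretation of dependence mentioned above (θ = 1 gives independence, θ < 1 negative dependence); the survival model and estimation procedure of the paper are not shown, and the arguments are illustrative.

```python
import numpy as np

def plackett_copula(u, v, theta):
    """Plackett copula C(u, v; theta); theta is a global association parameter,
    with theta = 1 corresponding to independence."""
    if np.isclose(theta, 1.0):
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    return (s - np.sqrt(s**2 - 4.0 * theta * (theta - 1.0) * u * v)) / (2.0 * (theta - 1.0))

print(plackett_copula(0.3, 0.6, theta=4.0))   # positive dependence
print(plackett_copula(0.3, 0.6, theta=0.25))  # negative dependence (theta < 1)
```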

20.
In this paper, a new tightening concept has been incorporated into the single-level continuous sampling plan CSP-1, such that quality degradation causes sampling inspection to cease beyond a certain number of sampled items, until new evidence of good quality is established. Expressions for the performance measures of this new plan, such as the operating characteristic, average outgoing quality and average fraction inspected, are derived using a Markov chain model. The advantage of the tightened CSP-1 plan is that it makes it possible to lower the average outgoing quality limit.
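For orientation, the sketch below simulates the classical Dodge CSP-1 plan (clearance number i, sampling fraction f) and empirically estimates the average fraction inspected and average outgoing quality; the tightening mechanism introduced in the paper is not modeled, and the incoming quality level and plan parameters are assumptions.

```python
import numpy as np

def simulate_csp1(p, i, f, n_items, rng):
    """Simulate CSP-1: 100% inspection until i consecutive conforming items,
    then inspect a random fraction f; a defect found (and removed) restarts
    100% inspection. Returns estimated (AFI, AOQ)."""
    inspected = defects_out = 0
    screening, run = True, 0
    for _ in range(n_items):
        defective = rng.random() < p
        inspect = screening or (rng.random() < f)
        if inspect:
            inspected += 1
            if defective:
                screening, run = True, 0    # defect found: back to 100% inspection
            else:
                run += 1
                if screening and run >= i:
                    screening = False       # switch to sampling inspection
        else:
            defects_out += defective        # uninspected defective passes through
    return inspected / n_items, defects_out / n_items

rng = np.random.default_rng(5)
print(simulate_csp1(p=0.02, i=50, f=0.1, n_items=200_000, rng=rng))
```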
