Similar Literature
20 similar documents found.
1.
Grötschel M., Jünger M., Reinelt G. 《Statistical Papers》1983, 25(1):261–295

In this paper we present optimum triangulations of a large number of input-output matrices. In particular, we report on a series of (44,44)-matrices for the years 1959, 1965, 1970, and 1975 for the countries of the European Community, on all (56,56)-matrices compiled by the Deutsches Institut für Wirtschaftsforschung for the Federal Republic of Germany, and on the (60,60)-matrices of the Statistisches Bundesamt of the Federal Republic of Germany. These optimum triangulations were obtained with a code developed by the authors which utilizes new polyhedral results for the triangulation problem in a linear programming cutting plane framework. With this code the range of solvability of triangulation problems was more than doubled (in terms of sector numbers) compared to previous work. In particular, optimum solutions were not previously known for any of the triangulation problems mentioned above. Moreover, we discuss various claims about properties of optimum solutions made in the literature and question some common concepts of analysing triangulated input-output matrices.

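Triangulating an (n,n) input-output matrix means finding a simultaneous permutation of its rows and columns that maximizes the total flow above the diagonal. As a point of reference for the problem the cutting-plane code solves at scale, here is a minimal brute-force sketch (the 3-sector matrix is hypothetical example data; exhaustive search is hopeless for the 44-60 sector matrices treated in the paper):

```python
# Brute-force triangulation of a small input-output matrix: find the
# sector ordering that maximizes the sum of flows strictly above the
# diagonal. Feasible only for tiny n; the paper's LP cutting-plane
# code is needed for realistic sector counts.
from itertools import permutations
import numpy as np

def triangulate_bruteforce(A):
    n = A.shape[0]
    best_perm, best_val = None, -np.inf
    for perm in permutations(range(n)):
        P = A[np.ix_(perm, perm)]          # reorder rows and columns
        val = np.triu(P, k=1).sum()        # flow above the diagonal
        if val > best_val:
            best_perm, best_val = perm, val
    return best_perm, best_val

A = np.array([[0, 5, 1],
              [2, 0, 7],
              [9, 3, 0]])                   # hypothetical 3-sector flows
perm, val = triangulate_bruteforce(A)
print(perm, val)                            # optimal ordering and flow
```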

2.
The paper considers the problem of finding accurate small-sample confidence intervals for regression parameters. Its approach is to construct conditional intervals with good robustness characteristics. This robustness is obtained by the choice of the density under which the conditional interval is computed. Both bounded-influence and S-estimate style intervals are given. The required tail area computations are carried out using the results of DiCiccio, Field & Fraser (1990).

3.
Graphical Markov models use undirected graphs (UDGs), acyclic directed graphs (ADGs), or (mixed) chain graphs to represent possible dependencies among random variables in a multivariate distribution. Whereas a UDG is uniquely determined by its associated Markov model, this is not true for ADGs or for general chain graphs (which include both UDGs and ADGs as special cases). This paper addresses three questions regarding the equivalence of graphical Markov models: when is a given chain graph Markov equivalent (1) to some UDG? (2) to some (at least one) ADG? (3) to some decomposable UDG? The answers are obtained by means of an extension of Frydenberg's (1990) elegant graph-theoretic characterization of the Markov equivalence of chain graphs.

4.
A Markov property associates a set of conditional independencies to a graph. Two alternative Markov properties are available for chain graphs (CGs), the Lauritzen–Wermuth–Frydenberg (LWF) and the Andersson–Madigan–Perlman (AMP) Markov properties, which are different in general but coincide for the subclass of CGs with no flags. Markov equivalence induces a partition of the class of CGs into equivalence classes, and every equivalence class contains a, possibly empty, subclass of CGs with no flags, itself containing a, possibly empty, subclass of directed acyclic graphs (DAGs). LWF-Markov equivalence classes of CGs can be naturally characterized by means of the so-called largest CGs, whereas a graphical characterization of equivalence classes of DAGs is provided by the essential graphs. In this paper, we show the existence of largest CGs with no flags that provide a natural characterization of equivalence classes of CGs of this kind, with respect to both the LWF- and the AMP-Markov properties. We propose a procedure for the construction of the largest CGs, the largest CGs with no flags and the essential graphs, thereby providing a unified approach to the problem. As by-products we obtain a characterization of graphs that are largest CGs with no flags and an alternative characterization of graphs which are largest CGs. Furthermore, a known characterization of the essential graphs is shown to be a special case of our more general framework. The three graphical characterizations have a common structure: they use two versions of a locally verifiable graphical rule. Moreover, in the case of DAGs, an immediate comparison of the three characterizing graphs is possible.

5.
This paper investigates the applicability of a Monte Carlo technique known as simulated annealing to achieve optimum or sub-optimum decompositions of probabilistic networks under bounded resources. High-quality decompositions are essential for performing efficient inference in probabilistic networks. Optimum decomposition of probabilistic networks is known to be NP-hard (Wen, 1990). The paper proves that cost-function changes can be computed locally, which is essential to the efficiency of the annealing algorithm. Pragmatic control schedules which reduce the running time of the annealing algorithm are presented and evaluated. Apart from the conventional temperature parameter, these schedules involve the radius of the search space as a new control parameter. The evaluation suggests that the inclusion of this new parameter is important for the success of the annealing algorithm for the present problem.
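A minimal sketch of the annealing loop described above, with the conventional geometric temperature schedule plus a shrinking search-radius control parameter; the cost and neighborhood functions are hypothetical stand-ins, since the paper's locally computable cost-change is specific to network decompositions:

```python
# Simulated annealing skeleton with two control parameters:
# temperature T and a search radius bounding how far a candidate
# move may differ from the current state.
import math
import random

def anneal(state, cost, neighbor, T0=1.0, alpha=0.95,
           radius0=10.0, steps=5000):
    T, radius = T0, radius0
    current, c_cur = state, cost(state)
    best, c_best = current, c_cur
    for _ in range(steps):
        cand = neighbor(current, max(1, int(radius)))
        c_cand = cost(cand)
        # accept improving moves always, worsening moves with
        # probability exp(-delta/T) (Metropolis criterion)
        if c_cand < c_cur or random.random() < math.exp(-(c_cand - c_cur) / T):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        T *= alpha          # geometric cooling
        radius *= alpha     # shrink the search-space radius alongside T
    return best, c_best

# usage: supply cost(state) and neighbor(state, radius) for the
# decomposition representation at hand, e.g. elimination orderings.
```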

6.
Logistic regression plays an important role in many fields. In practice, we often encounter missing covariates in applied work, particularly in the biomedical sciences. Ibrahim (1990) proposed a method to handle missing covariates in the generalized linear model (GLM) setup. It is well known that logistic regression estimates based on small or medium-sized samples with missing data are biased. Assuming covariates that are missing at random, in this paper we reduce the bias by two methods: first, we derive a closed-form bias expression following Cox and Snell (1968); second, we use a likelihood-based modification similar to Firth (1993). We show analytically that the Firth-type likelihood modification applied to Ibrahim's method yields a second-order bias reduction. The proposed methods are simple to apply to an existing method and require no further analytical work, apart from a small change in the optimization function. We carry out extensive simulation studies comparing the methods, and our simulation results are supported by a real-world data example.
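For concreteness, the Firth (1993) correction maximizes the log-likelihood penalized by half the log-determinant of the Fisher information, which removes the O(1/n) bias term of the MLE. A minimal sketch for complete-data logistic regression (the missing-covariate machinery of Ibrahim (1990) is omitted here):

```python
# Firth-penalized logistic regression: maximize
#   l(beta) + 0.5 * log det I(beta),
# where I(beta) = X' W X and W = diag(p * (1 - p)).
import numpy as np
from scipy.optimize import minimize

def firth_logistic(X, y):
    def neg_penalized_loglik(beta):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        p = np.clip(p, 1e-10, 1 - 1e-10)      # guard the logs
        loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        W = p * (1 - p)
        info = X.T @ (X * W[:, None])          # Fisher information
        _, logdet = np.linalg.slogdet(info)
        return -(loglik + 0.5 * logdet)        # Firth penalty
    res = minimize(neg_penalized_loglik, np.zeros(X.shape[1]),
                   method="BFGS")
    return res.x
```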

7.
In this paper, we investigate the problem of determining block designs which are optimal under type 1 optimality criteria within various classes of designs having v treatments arranged in b blocks of size k. The solutions to two optimization problems are given which are related to a general result obtained by Cheng (1978) and which are useful in this investigation. As one application of the solutions obtained, the definition of a regular graph design given in Mitchell and John (1977) is extended to that of a semi-regular graph design, and some sufficient conditions are derived for the existence of a semi-regular graph design which is optimal under a given type 1 criterion. A result is also given which shows how the sufficient conditions derived can be used to establish the optimality under a specific type 1 criterion of some particular types of semi-regular graph designs having both equal and unequal numbers of replicates. Finally, some sufficient conditions are obtained for the dual of an A- or D-optimal design to be A- or D-optimal within an appropriate class of dual designs.

8.
In this paper we consider the problem of optimally weighing n objects with N weighings on a chemical balance. Several previously known results are generalized. In particular, the designs shown by Ehlich (1964a) and Payne (1974) to be D-optimal in various classes of weighing designs where N ≡ 2 (mod 4) are shown to be optimal with respect to any optimality criterion of Type I as defined in Cheng (1980). Several results on the E-optimality of weighing designs are also given.

9.
A model involving autocorrelated random effects and sampling errors is proposed for small-area estimation, using both time-series and cross-sectional data. The sampling errors are assumed to have a known block-diagonal covariance matrix. This model is an extension of a well-known model, due to Fay and Herriot (1979), for cross-sectional data. A two-stage estimator of a small-area mean for the current period is obtained under the proposed model with known autocorrelation, by first deriving the best linear unbiased prediction estimator assuming known variance components, and then replacing them with their consistent estimators. Extending the approach of Prasad and Rao (1986, 1990) for the Fay-Herriot model, an estimator of mean squared error (MSE) of the two-stage estimator, correct to a second-order approximation for a small or moderate number of time points, T, and a large number of small areas, m, is obtained. The case of unknown autocorrelation is also considered. Limited simulation results on the efficiency of two-stage estimators and the accuracy of the proposed estimator of MSE are presented.
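For background, the cross-sectional Fay-Herriot building block works as follows: direct survey estimates y_i = x_i'beta + v_i + e_i are shrunk toward a regression-synthetic estimate, with shrinkage weight gamma_i = sigma2_v / (sigma2_v + D_i). A minimal two-stage (EBLUP) sketch assuming known sampling variances D_i and a simple moment estimator of the model variance; the paper's time-series extension with autocorrelated effects is not shown:

```python
# Two-stage (EBLUP) estimator for the basic Fay-Herriot model:
# y_i = x_i' beta + v_i + e_i, var(v_i) = sigma2_v, var(e_i) = D_i known.
import numpy as np

def fay_herriot_eblup(y, X, D):
    m, p = X.shape
    # crude method-of-moments estimate of the model variance
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    sigma2_v = max(0.0, (resid @ resid - D.sum()) / (m - p))
    # stage 1: GLS estimate of beta using the estimated variance
    V = sigma2_v + D
    Xw = X / V[:, None]
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    # stage 2: shrink each direct estimate toward the synthetic part
    gamma = sigma2_v / V
    theta = gamma * y + (1 - gamma) * (X @ beta)
    return theta, beta, sigma2_v
```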

10.
In this paper we consider the problem of testing hypotheses in parametric models when only the first r (of n) ordered observations are known. Using divergence measures, a procedure to test statistical hypotheses is proposed. Replacing the parameters by suitable estimators in the expression of the divergence measure, the test statistics are obtained. Asymptotic distributions for these statistics are given in several cases when maximum likelihood estimators for truncated samples are considered. Applications of these results in testing statistical hypotheses, on the basis of truncated data, are presented. The small-sample behavior of the proposed test statistics is analyzed in particular cases. A comparative study of power values is carried out by computer simulation.

11.
It is known that n-cyclic designs provide a flexible class of designs suitable for setting out factorial experiments. In this paper we show that many of these designs are resolvable. Further, an extensive class of practically useful designs can be derived from them by deleting replicates. The properties of the designs compare favourably with those obtained by the algorithm of Williams and John (1996) (Appl. Statist. 45, 39–46).

12.
Yu M., Nan B. 《Lifetime Data Analysis》2006, 12(3):345–364
As an alternative to the Cox model, the rank-based estimating method for censored survival data has been studied extensively since it was proposed by Tsiatis [Tsiatis AA (1990) Ann Stat 18:354–372], among others. Due to the discontinuity of the estimating function, a significant amount of work in the literature has focused on numerical issues. In this article, we consider the computational aspects of a family of doubly weighted rank-based estimating functions. This family is rich enough to include both the estimating functions of Tsiatis (1990) for randomly observed data and those of Nan et al. [Nan B, Yu M, Kalbfleisch JD (2006) Biometrika (to appear)] for case-cohort data as special examples. The latter belongs to the class of biased sampling problems. We show that the doubly weighted rank-based discontinuous estimating functions are monotone when generalized Gehan-type weights are used, a property previously established for randomly observed data. Although the estimating problem can be formulated as a linear programming problem, as for randomly observed data, its scale quickly becomes unmanageable even for moderate sample sizes; we therefore propose a Newton-type iterative method to search for an approximate solution of the (system of) discontinuous monotone estimating equation(s). Simulation results provide a good demonstration of the proposed method. We also apply our method to a real data example.
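The monotonicity claim can be made concrete in the unweighted Gehan case: the Gehan estimating function is the gradient of a convex, piecewise-linear loss for the accelerated failure time (AFT) model, so descent methods reach the solution set. The sketch below minimizes that convex loss directly with a generic optimizer; it is a simplified stand-in, not the paper's doubly weighted Newton-type search:

```python
# Gehan-type convex rank objective for the AFT model
# log T = X beta + error, with censoring indicator delta.
import numpy as np
from scipy.optimize import minimize

def gehan_loss(beta, logt, X, delta):
    e = logt - X @ beta                   # residuals e_i(beta)
    diff = e[None, :] - e[:, None]        # matrix of e_j - e_i
    # sum over pairs of delta_i * max(e_j - e_i, 0): convex in beta
    return np.sum(delta[:, None] * np.maximum(diff, 0.0)) / len(e) ** 2

def gehan_fit(logt, X, delta, beta0=None):
    beta0 = np.zeros(X.shape[1]) if beta0 is None else beta0
    # nonsmooth but convex, so a derivative-free search suffices
    res = minimize(gehan_loss, beta0, args=(logt, X, delta),
                   method="Nelder-Mead")
    return res.x
```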

13.
Outlier detection is a major topic in robust statistics due to the high practical significance of anomalous observations. Many existing methods, however, either are parametric or cease to perform well when the data are far from linearly structured. In this paper, we propose a quantity, Delaunay outlyingness, that is a nonparametric outlyingness score applicable to data with complicated structure. The approach is based on a well-known triangulation of the sample, which seems to reflect the sparsity of the point set in different directions in a useful way. We derive results on the asymptotic behavior of Delaunay outlyingness in the case of a sufficiently simple set of observations. Simulations and an application to empirical data are also discussed.
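The abstract does not spell out the score's definition, so the following sketch implements one natural variant as an illustration (my own construction, not necessarily the paper's): score each point by the mean edge length of the Delaunay simplices it belongs to, so points in sparse regions receive large scores.

```python
# Rough proxy for a Delaunay-based outlyingness score: points on the
# sparse outskirts of the sample belong to simplices with long edges.
# Illustrative variant only, not the score defined in the paper.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_outlyingness(points):
    tri = Delaunay(points)
    score = np.zeros(len(points))
    count = np.zeros(len(points))
    for simplex in tri.simplices:
        pts = points[simplex]
        # mean pairwise edge length of this simplex
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        mean_edge = d.sum() / (len(simplex) * (len(simplex) - 1))
        for v in simplex:
            score[v] += mean_edge
            count[v] += 1
    return score / np.maximum(count, 1)   # average over incident simplices

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)), [[6.0, 6.0]]])  # one far point
print(delaunay_outlyingness(X).argmax())  # should flag index 200
```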

14.
The quasi-likelihood function proposed by Wedderburn [Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika. 1974;61:439–447] broadened the application scope of generalized linear models (GLMs) by specifying only the mean and variance functions instead of the entire distribution. However, in many situations, complete specification of the variance function in the quasi-likelihood approach may not be realistic. Following Fahrmeir's [Maximum likelihood estimation in misspecified generalized linear models. Statistics. 1990;21:487–502] treatment of misspecified GLMs, we define a quasi-likelihood nonlinear model (QLNM) with misspecified variance function by replacing the unknown variance function with a known function. In this paper, we propose some mild regularity conditions under which the existence and the asymptotic normality of the maximum quasi-likelihood estimator (MQLE) are obtained in the QLNM with misspecified variance function. We suggest computing the MQLE of the unknown parameter by the Gauss–Newton iteration procedure and show that it works well in a simulation study.
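A sketch of the Gauss–Newton (Fisher-scoring) iteration for the MQLE with a working, possibly misspecified, variance function; the mean function, its Jacobian, and the working variance are user-supplied assumptions in this sketch:

```python
# Gauss-Newton iteration for the maximum quasi-likelihood estimator
# in a nonlinear model with mean mu(x, beta) and working variance
# V(mu), which may be misspecified.
import numpy as np

def mqle_gauss_newton(y, x, mu, dmu, V, beta0, tol=1e-8, maxit=100):
    beta = np.asarray(beta0, dtype=float)
    for _ in range(maxit):
        m = mu(x, beta)                 # fitted means, shape (n,)
        D = dmu(x, beta)                # Jacobian d mu / d beta, (n, p)
        w = 1.0 / V(m)                  # working weights
        score = D.T @ (w * (y - m))     # quasi-score D' W (y - m)
        info = D.T @ (D * w[:, None])   # scoring matrix D' W D
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.linalg.norm(step) < tol:
            break
    return beta

# e.g. exponential mean mu = exp(b0 + b1*x) with working V(mu) = mu;
# a hypothetical choice -- the true variance function may differ.
```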

15.
We derive explicit formulas for Sobol's sensitivity indices (SSIs) under generalized linear models (GLMs) with independent or multivariate normal inputs. We argue that the main-effect SSIs provide a powerful tool for variable selection under GLMs with identity links in polynomial regressions. We also show via examples that the SSI-based variable selection results are similar to those obtained by the random forest algorithm, but without the computational burden of data permutation. Finally, applying our results to the problem of gene network discovery, we identify through SSI analysis of a public microarray dataset several novel higher-order gene–gene interactions missed by the more standard inference methods. The relevant functions for SSI analysis derived here under GLMs with identity, log, and logit links are implemented and made available in the R package Sobol sensitivity.
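In the simplest identity-link case with independent inputs the explicit formula is elementary: for Y = b0 + sum_i b_i X_i, the main-effect index of X_i is b_i^2 Var(X_i) / Var(Y). A small sketch of that special case (in Python rather than the paper's R package):

```python
# Closed-form main-effect Sobol indices for a first-order linear
# model with independent inputs: each index is b_i^2 var(X_i)
# divided by the total variance of Y.
import numpy as np

def sobol_main_effects_linear(b, var_x):
    b, var_x = np.asarray(b), np.asarray(var_x)
    contrib = b ** 2 * var_x          # per-input variance contribution
    return contrib / contrib.sum()

print(sobol_main_effects_linear(b=[2.0, 1.0, 0.1],
                                var_x=[1.0, 4.0, 1.0]))
# -> roughly [0.499, 0.499, 0.001]: the tiny-coefficient input
#    contributes almost nothing and would be dropped in selection
```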

16.
Quarterly data for the period 1960:1 to 1997:2, conventional tests, a bootstrap simulation approach and a multivariate Rao's F-test are used to investigate whether the causality between government spending and revenue in Finland changed at the beginning of 1990 due to the plans, then in prospect, to create the European Monetary Union (EMU). The results indicate that during the period before 1990 government revenue Granger-caused spending, while the opposite happened after 1990, which agrees better with Barro's tax-smoothing hypothesis. However, when monthly data are used instead of quarterly data for almost the same sample period, totally different results emerge. The general conclusion is that the relationship between spending and revenue in Finland is still not completely understood. The ambiguity of these results may well be due to the fact that several time scales are involved in the relationship, and that the conventional analyses may be inadequate to separate out the time-scale-structured relationships between these variables. Therefore, to investigate the relation between these variables empirically, we use wavelet analysis, which enables us to separate out different time scales of variation in the data. We find that time-scale decomposition is important for analysing these economic variables.
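A sketch of the time-scale decomposition step using the discrete wavelet transform (PyWavelets), as a generic stand-in for the wavelet analysis described above: each reconstructed detail level isolates variation at one band of time scales, and scale-by-scale causality tests can then be run on the components.

```python
# Multiresolution decomposition of a series: zero out all wavelet
# coefficients except one detail level, reconstruct, and collect the
# resulting components (coarsest scale first).
import numpy as np
import pywt

def mra(series, wavelet="db4", level=4):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    details = []
    for i in range(1, len(coeffs)):
        keep = [np.zeros_like(c) for c in coeffs]
        keep[i] = coeffs[i]
        details.append(pywt.waverec(keep, wavelet)[: len(series)])
    return details

# e.g. compute mra(spending) and mra(revenue), then run the
# Granger-causality tests scale by scale on matching components.
```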

17.
It is widely known that bootstrap failure can often be remedied by using a technique known as the 'm out of n' bootstrap, by which a smaller number, m say, of observations are resampled from the original sample of size n. In successful cases of the bootstrap, the m out of n bootstrap is often deemed unnecessary. We show that the problem of constructing nonparametric confidence intervals is an exceptional case. By considering a new class of m out of n bootstrap confidence limits, we develop a computationally efficient approach based on the double bootstrap to construct the optimal m out of n bootstrap intervals. We show that the optimal intervals have a coverage accuracy which is comparable with that of the classical double-bootstrap intervals, and we conduct a simulation study to examine their performance. The results are in general very encouraging. Alternative approaches which yield even higher order accuracy are also discussed.
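A sketch of the basic m out of n percentile-type interval: resample m < n observations, rescale the bootstrap deviations to the root-n convergence rate, and invert. Choosing m optimally, the paper's contribution, requires the double bootstrap and is not shown here.

```python
# Basic 'm out of n' bootstrap confidence interval for a statistic:
# the law of sqrt(m)*(theta* - theta_hat) estimates that of
# sqrt(n)*(theta_hat - theta), so deviations are scaled by sqrt(m/n).
import numpy as np

def m_out_of_n_ci(x, stat, m, B=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    theta = stat(x)
    boot = np.array([stat(rng.choice(x, size=m, replace=True))
                     for _ in range(B)])
    dev = np.sqrt(m / n) * (boot - theta)    # rescaled deviations
    lo, hi = np.quantile(dev, [1 - alpha / 2, alpha / 2])
    return theta - lo, theta - hi            # (lower, upper) limits

x = np.random.default_rng(1).exponential(size=200)
print(m_out_of_n_ci(x, np.median, m=50))
```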

18.
By means of Monte Carlo simulations we study the irreversible, random, sequential filling of small clusters (e.g., pairs, triples,...) on linear, square, and cubic lattices. In particular, we are interested in the fraction of sites filled at saturation (the point at which further filling is not possible without rearrangement of the filled and empty sites). The results obtained show good agreement with those of previously developed analytic techniques.

We present the first extensive results for filling linear strings of lattice sites by use of the end-on mechanism (where the ends of the string are chosen sequentially rather than simultaneously as in conventional filling). For end-on filling we find that the saturation coverage increases, relative to conventional filling, for short strings, but decreases as we go to the limit of infinitely long strings (the car-parking problem).

An examination of the Palásti conjecture (and its extension to discrete lattices) is also made.
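As a concrete instance of the conventional (simultaneous) filling described above, consider dimers on a linear lattice, where the saturation coverage is known in closed form (Flory's value 1 - e^{-2} ≈ 0.8647). A quick Monte Carlo sketch that reproduces it:

```python
# Irreversible random sequential filling of pairs (dimers) on a
# linear lattice: visit candidate positions in uniformly random
# order (equivalent to repeated random attempts) and fill each
# adjacent empty pair when possible.
import numpy as np

def dimer_saturation(n, seed=0):
    rng = np.random.default_rng(seed)
    occupied = np.zeros(n, dtype=bool)
    for i in rng.permutation(n - 1):         # random left ends
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
    return occupied.mean()

print(dimer_saturation(100_000))             # approx 0.8647
```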

19.
In this paper we consider a binary, monotone system whose component states are dependent through the possible occurrence of independent common shocks, i.e. shocks that destroy several components at once. The individual failure of a component is also thought of as a shock. Such systems can be used to model common cause failures in reliability analysis. The system may be a technological one, or a human being. It is observed until it fails or dies. At this instant, the set of failed components and the failure time of the system are noted. The failure times of the components are not known. These are the so-called autopsy data of the system. For the case of independent components, i.e. no common shocks, Meilijson (1981), Nowik (1990), Antoine et al. (1993) and Gåsemyr (1998) discuss the corresponding identifiability problem, i.e. whether the component life distributions can be determined from the distribution of the observed data. Assuming a model where autopsy data are known to be enough for identifiability, Meilijson (1994) goes beyond the identifiability question and into maximum likelihood estimation of the parameters of the component lifetime distributions based on empirical autopsy data from a sample of several systems. He also considers life-monitoring of some components and conditional life-monitoring of others. Here a corresponding Bayesian approach is presented for the shock model. Due to prior information, one advantage of this approach is that the identifiability problem represents no obstacle. The motivation for introducing the shock model is that the autopsy model is of special importance when components cannot be tested separately because it is difficult to reproduce the conditions prevailing in the functioning system. In Gåsemyr & Natvig (1997) we treat the Bayesian approach to life-monitoring and conditional life-monitoring of components.

20.

In this paper, we consider testing for linearity against a well-known class of regime-switching models known as the smooth transition autoregressive (STAR) models. Apart from the model selection issues, one reason for interest in testing for linearity in time-series models is that non-linear models such as the STAR are considerably more difficult to use. This testing problem is non-standard because a nuisance parameter becomes unidentified under the null hypothesis. In this paper, we further explore the class of tests proposed by Luukkonen, Saikkonen and Teräsvirta (1988), who proposed LM tests for linearity against STAR models. A potential difficulty here is that the linear approximation introduces high-leverage points, and hence outliers are likely to be quite influential. To overcome this difficulty, we use the same approximating linear model of Luukkonen et al. (1988), but we apply Wald and F-tests based on l1- and bounded-influence estimates. The efficiency gains of this procedure cannot be easily deduced from the existing theoretical results because the test is based on a misspecified model under H1. Therefore, we carried out a simulation study, in which we observed that the robust tests have desirable properties compared to the test of Luukkonen et al. (1988) for a range of error distributions in the STAR model; in particular, the robust tests have power advantages over the LM test.
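A sketch of the approximating linear model behind the Luukkonen et al. (1988) tests: expand the smooth transition function in a Taylor series, so linearity reduces to the joint insignificance of auxiliary regressors formed as lags times powers of the transition variable. The least-squares F-test below is the non-robust baseline; the robust version would replace the least-squares fits with l1 or bounded-influence fits.

```python
# Auxiliary-regression linearity test against STAR alternatives:
# fit the AR(p) null, add regressors y_{t-j} * y_{t-d}^k (k=1,2,3),
# and F-test their joint significance.
import numpy as np
from scipy import stats

def star_linearity_test(y, p=1, d=1):
    y = np.asarray(y, dtype=float)
    idx = np.arange(max(p, d), len(y))
    yt = y[idx]
    lags = np.column_stack([y[idx - j] for j in range(1, p + 1)])
    X0 = np.column_stack([np.ones(len(idx)), lags])       # AR(p) null
    trans = y[idx - d]                                    # transition variable
    Z = np.column_stack([lags * (trans ** k)[:, None] for k in (1, 2, 3)])
    X1 = np.column_stack([X0, Z])                         # augmented model
    rss0 = np.sum((yt - X0 @ np.linalg.lstsq(X0, yt, rcond=None)[0]) ** 2)
    rss1 = np.sum((yt - X1 @ np.linalg.lstsq(X1, yt, rcond=None)[0]) ** 2)
    q, dof = Z.shape[1], len(idx) - X1.shape[1]
    F = ((rss0 - rss1) / q) / (rss1 / dof)
    return F, stats.f.sf(F, q, dof)      # statistic and p-value
```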
