At present, many international organizations and scholars aiming to compare and assess country-specific economies or competitiveness have set up different standards and indicators to evaluate the economic strength of individual countries. However, most of these standards and indicators address only individual aspects and, worse, are not well suited to the real situations of the countries concerned. This paper deals with methodological issues in the assessment of state economic strength. To this end, the authors review preceding studies on the assessment of a country's economy, conceptualize state economic strength, establish a new system of indicators for assessing it, and on this basis produce a methodology for its synthetic assessment. The findings are that state economic strength must be defined in terms of the economic capability a country can exhibit by itself even under an uncertain external environment, that the indicators for assessing it must span a variety of dimensions in line with its essence, and that the assessment methodology must be a synthetic one that takes the weights of the indicators into account. These findings may help policymakers assess the economy of a given country and take economic and technical measures to strengthen it, and may also help organizations and scholars compare and assess country-specific economies from a new perspective.
A partition problem in one-dimensional space is to seek a partition of a set of numbers that maximizes a given objective function. In some partition problems, the partition size, i.e., the number of nonempty parts in a partition, is fixed; in others, the size can vary arbitrarily. We call the former the size-partition problem and the latter the open-partition problem. In general, open problems are much harder to solve since their objective functions depend on the size. In this paper, we propose a new approach that allows empty parts and transforms the open problem into a size problem allowing empty parts, called a relaxed-size problem. While sortability theory has been established in the literature as a powerful tool for attacking size-partition problems, we develop a sortability theory for relaxed-size problems as a medium for solving open problems.
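As a toy illustration of the size/open distinction described above (not the paper's relaxed-size or sortability method), the following brute-force sketch maximizes the sum of squared part sums over partitions of a small number set; the objective, data, and function names are all hypothetical choices for illustration.

```python
from itertools import product

def best_size_partition(nums, k):
    """Brute-force size-partition problem: best partition of nums into
    exactly k nonempty parts, maximizing the sum of squared part sums
    (a toy objective, not the paper's)."""
    best_val, best_parts = float("-inf"), None
    for labels in product(range(k), repeat=len(nums)):
        if len(set(labels)) != k:      # require every part to be nonempty
            continue
        sums = [0.0] * k
        for x, g in zip(nums, labels):
            sums[g] += x
        val = sum(s * s for s in sums)
        if val > best_val:
            best_val = val
            best_parts = [[x for x, g in zip(nums, labels) if g == j]
                          for j in range(k)]
    return best_val, best_parts

def best_open_partition(nums):
    """Open-partition problem: the size is free, so search every size."""
    return max(best_size_partition(nums, k)
               for k in range(1, len(nums) + 1))

val_fixed, _ = best_size_partition([1, -2, 3, 4], 2)
val_open, parts_open = best_open_partition([1, -2, 3, 4])
print(val_fixed, val_open)
```

Note how the open problem simply wraps the size problem in an outer search over sizes; the paper's contribution is to avoid this blow-up by allowing empty parts within a single fixed-size formulation.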
This study examines the effects of the school choice policy by utilizing data from the Seoul Education Longitudinal Study. Specifically, the school participation and school satisfaction of parents whose child entered high school in 2010 through the high school choice policy are analyzed. The results reveal that the opportunity for school choice itself is not strongly relevant to parental participation in school. Parental participation in school is influenced more by individual factors than institutional factors. In addition, providing school choice does not lead to an increase in parental school satisfaction. Whether the students actually entered the school they preferred during the school choice phases has more significance than only having the right of choice. Based on the results, the implications of the study and some suggestions for the school choice policy in Korea are discussed.
When one or a few observations are deleted from a multiple linear regression model, the deletion can affect variable selection. In this paper we derive the formula for the Mallows Cp criterion when k observations are deleted and express it as a function of basic building blocks such as residuals and leverages. Two real data sets are also used to see how the selected model changes as a few observations are deleted.
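As background for the abstract above, here is a minimal sketch of the standard Mallows Cp computation (the paper's deletion formula in terms of residuals and leverages is not reproduced); the simulated data and function names are illustrative assumptions.

```python
import numpy as np

def mallows_cp(X_full, y, cols):
    """Standard Mallows' Cp for the submodel using columns `cols`:
    Cp = SSE_p / s^2 - n + 2p, where s^2 is the MSE of the full model
    and p is the number of parameters in the submodel."""
    n, k_full = X_full.shape

    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)

    s2 = sse(X_full) / (n - k_full)    # full-model error variance estimate
    p = len(cols)
    return sse(X_full[:, cols]) / s2 - n + 2 * p

# simulated data: only the intercept and the first predictor matter
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 2.0, 0.0, 0.0]) + rng.normal(size=n)
print(mallows_cp(X, y, [0, 1]))    # Cp near p for a well-specified submodel
```

By construction the full model's Cp equals its parameter count exactly, which is a handy sanity check when re-deriving Cp after case deletion.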
The overall Type I error computed in the traditional way may be inflated when many hypotheses are compared simultaneously. The family-wise error rate (FWER) and the false discovery rate (FDR) are among the commonly used error rates for measuring Type I error in the multiple-hypothesis setting. Many FWER- and FDR-controlling procedures have been proposed and can control the desired FWER/FDR under certain scenarios. Nevertheless, these controlling procedures become too conservative when only some of the hypotheses are true nulls. Benjamini and Hochberg (J. Educ. Behav. Stat. 25:60–83, 2000) proposed an adaptive FDR-controlling procedure that incorporates information about the number of true null hypotheses (m0) to overcome this problem. Since m0 is unknown, estimators of m0 are needed. Benjamini and Hochberg (J. Educ. Behav. Stat. 25:60–83, 2000) suggested a graphical approach for constructing an estimator of m0, which is shown to overestimate m0 (see Hwang in J. Stat. Comput. Simul. 81:207–220, 2011). Following a similar construction, this paper proposes new estimators of m0. Monte Carlo simulations are used to evaluate the accuracy and precision of the new estimators, and the feasibility of the corresponding new adaptive procedures is evaluated under various simulation settings.
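The Benjamini–Hochberg step-up procedure discussed above can be sketched as follows; the optional `m0` argument shows where an adaptive procedure would plug in an estimate of the number of true nulls (the paper's specific estimators of m0 are not reproduced, and the p-values below are hypothetical).

```python
import numpy as np

def bh_reject(pvals, q, m0=None):
    """Benjamini–Hochberg step-up procedure at FDR level q.
    With m0 given, use the adaptive threshold i*q/m0 in place of i*q/m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    denom = m if m0 is None else m0
    order = np.argsort(p)
    thresh = np.arange(1, m + 1) * q / denom
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        # step-up: reject the k hypotheses with the smallest p-values,
        # where k is the largest i with p_(i) <= i*q/denom
        k = int(np.max(np.nonzero(below)[0]))
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.60, 0.75, 0.90]
print(bh_reject(pvals, q=0.05).sum())         # standard BH
print(bh_reject(pvals, q=0.05, m0=4).sum())   # adaptive, with a supposed m0
```

The example shows why adaptivity matters: dividing by a smaller m0 raises every threshold, so borderline p-values that the standard procedure misses can become rejections.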
The problem of improving upon the usual set estimator of a multivariate normal mean has only recently seen significant advances. Improved sets that take advantage of the Stein effect have been constructed. It is shown here that the Stein effect is so powerful that one can construct improved confidence sets that can have zero radius on a set of positive probability. Other, somewhat more sensible, sets which attain arbitrarily small radius are also constructed, and it is argued that one way to eliminate unreasonable confidence sets is through a conditional evaluation.
Detecting local spatial clusters for count data is an important task in spatial epidemiology. Two broad approaches—moving window and disease mapping methods—have been suggested in the literature to find clusters. However, the existing methods employ somewhat arbitrarily chosen tuning parameters, and the local clustering results are sensitive to these choices. In this paper, we propose a penalized likelihood method to overcome the limitations of existing local spatial clustering approaches for count data. We start with a Poisson regression model to accommodate any type of covariates, and formulate the clustering problem as a penalized likelihood estimation problem to find change points of intercepts in two-dimensional space. The cost of developing a new algorithm is minimized by modifying an existing least absolute shrinkage and selection operator algorithm. The computational details of the modifications are shown, and the proposed method is illustrated with Seoul tuberculosis data.
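To illustrate the kind of objective such a formulation involves (a sketch under assumptions, not the paper's exact model), the following evaluates a Poisson negative log-likelihood with an L1 fused penalty on intercepts of neighboring regions; all data and names are hypothetical, and the optimizer itself (the paper adapts a LASSO algorithm) is omitted.

```python
import numpy as np

def penalized_negloglik(counts, offsets, mu, edges, lam):
    """Sketch of a fused-lasso-style clustering objective: Poisson negative
    log-likelihood with region-specific intercepts mu, plus an L1 penalty
    on differences between intercepts of neighboring regions. Minimizing
    over mu tends to fuse neighbors into constant clusters."""
    counts = np.asarray(counts, dtype=float)
    eta = mu + offsets                       # log expected count per region
    nll = float(np.sum(np.exp(eta) - counts * eta))
    penalty = lam * sum(abs(mu[i] - mu[j]) for i, j in edges)
    return nll + penalty

# toy 4-region chain: regions 0-1 share a low rate, 2-3 a high rate
counts = np.array([5, 6, 20, 22])
offsets = np.log(np.full(4, 10.0))           # baseline expected counts
edges = [(0, 1), (1, 2), (2, 3)]             # spatial adjacency
mu_clustered = np.array([-0.6, -0.6, 0.7, 0.7])
mu_flat = np.zeros(4)
print(penalized_negloglik(counts, offsets, mu_clustered, edges, lam=1.0))
```

A clustered intercept vector that matches the data pays the penalty on only one neighboring pair (the change point between regions 1 and 2), which is why the L1 fused penalty recovers piecewise-constant spatial clusters.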