Similar Documents
20 similar documents found.
1.
Global population trends are reviewed. The author concludes that a level of overpopulation is inevitable, not primarily because of problems of food supply but because of the environmental degradation that will result from population increases. The author suggests that these environmental changes will lead to increases in mortality and declines in fertility.

2.
This article develops a statistical test for the presence of a jump in an otherwise smooth transition process. In this test, the null model is a threshold regression and the alternative is a smooth transition model. We propose a quasi-Gaussian likelihood ratio statistic and derive its asymptotic distribution, defined as the maximum of a two-parameter Gaussian process with a nonzero bias term. Asymptotic critical values can be tabulated and depend on the transition function employed. A simulation method for computing empirical critical values is also developed. Finite-sample performance of the test is assessed via Monte Carlo simulations. The test is applied to investigate the dynamics of racial segregation within cities across the United States.
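The abstract does not give the exact form of the statistic, so the following is a minimal, hedged sketch: a concentrated least-squares version of a quasi-likelihood-ratio statistic comparing a two-regime threshold regression (the null) with a logistic smooth-transition regression (the alternative), with empirical critical values tabulated by simulating the null model. The grids, function names, and data-generating process are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def sse_threshold(y, x, q):
    """Concentrated SSE of a two-regime threshold model (grid over the threshold c)."""
    best = np.inf
    for c in np.quantile(q, np.linspace(0.15, 0.85, 30)):
        d = (q > c).astype(float)
        X = np.column_stack([x, x * d[:, None]])       # regime-specific slopes
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        best = min(best, np.sum((y - X @ b) ** 2))
    return best

def sse_smooth(y, x, q):
    """Concentrated SSE of a logistic smooth-transition model (grid over gamma and c)."""
    best = np.inf
    for gamma in np.geomspace(0.5, 50.0, 12):
        for c in np.quantile(q, np.linspace(0.15, 0.85, 12)):
            G = 1.0 / (1.0 + np.exp(-gamma * (q - c)))
            X = np.column_stack([x, x * G[:, None]])
            b = np.linalg.lstsq(X, y, rcond=None)[0]
            best = min(best, np.sum((y - X @ b) ** 2))
    return best

def qlr_stat(y, x, q):
    """Quasi-LR statistic: n * log(SSE under the null / SSE under the alternative)."""
    n = len(y)
    return max(0.0, n * np.log(sse_threshold(y, x, q) / sse_smooth(y, x, q)))

# Tabulate empirical critical values by simulating the null (threshold) model.
n, reps = 200, 300
stats = []
for _ in range(reps):
    q = rng.normal(size=n)
    x = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = x @ np.array([1.0, 0.5]) + (x @ np.array([0.5, -0.5])) * (q > 0) + rng.normal(size=n)
    stats.append(qlr_stat(y, x, q))

print("empirical 90/95/99% critical values:", np.quantile(stats, [0.90, 0.95, 0.99]))
```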

3.
It is a year since the Statistics Authority came into being. Official statistics are now supposed to be independent and on course to be trustworthy—and transformed to serve not just government but society as a whole. Has it all worked out? Ian Maclean of the Statistics Users Forum says a start has been made, but progress has been slower than hoped.

4.
5.
We observe X_1, …, X_k, where X_i has density f(x; θ_i) possessing a monotone likelihood ratio. The best population corresponds to the largest θ_i. We select the population corresponding to the largest X_i. The goal is to attach the best possible p-value to the inference that the selected population has the uniquely largest θ_i. Gutmann and Maymin (1987) considered the location-parameter case and derived the supremum of the error probability by conditioning on S, the index of the largest X_i. Using this conditioning approach, Kannan and Panchapakesan (2009) considered the problem for the gamma family. We consider here a unified approach to both the location- and scale-parameter cases, and obtain the supremum of the error probability without using conditioning.
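As a hedged illustration of the error probability being studied (not the paper's conditioning-free derivation), the snippet below Monte Carlo-estimates P(the selected population is not the best) for k normal location populations under a few "slippage" configurations; the population count, gaps, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def error_prob(theta, reps=200_000):
    """P(argmax X_i != argmax theta_i) when X_i ~ N(theta_i, 1), estimated by simulation."""
    X = rng.normal(loc=theta, scale=1.0, size=(reps, len(theta)))
    return np.mean(X.argmax(axis=1) != np.argmax(theta))

k = 3
# Slippage configurations: one mean ahead of the rest by delta.
for delta in [0.25, 0.5, 1.0, 2.0]:
    theta = np.array([delta] + [0.0] * (k - 1))
    print(f"delta={delta:4.2f}  error prob ~ {error_prob(theta):.3f}")
```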

6.
The 1978 European Community Typology for Agricultural Holdings is described in this paper and contrasted with a data-based, polythetic multivariate classification derived from cluster analysis.

The requirement to reduce the size of the variable set employed in an optimisation-partition method of clustering suggested the value of principal components and factor analysis for the identification of major ‘source’ dimensions against which to measure farm differences and similarities.

The Euclidean cluster analysis incorporating the reduced dimensions quickly converged to a stable solution and was little influenced by the initial number or nature of ‘seeding’ partitions of the data.

The assignment of non-sampled observations from the population to cluster classes was completed using classification functions.

The final scheme, based on a sample of over 2,000 observations, was found to be interpretable and meaningful in terms of agricultural structure and practice, and much superior in explanatory power to a version of the principal-activity typology.
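A minimal sketch of the general workflow described above, using off-the-shelf tools rather than the original software: principal components supply the reduced "source" dimensions, a Euclidean k-means performs the optimisation-partition step, and linear discriminant functions stand in for the classification functions that assign non-sampled holdings. The data, component count, and cluster count are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
sample = rng.normal(size=(2000, 25))       # stand-in for ~2,000 sampled holdings
population = rng.normal(size=(10000, 25))  # stand-in for the non-sampled population

# Reduce the variable set to a few 'source' dimensions.
scaler = StandardScaler().fit(sample)
pca = PCA(n_components=6).fit(scaler.transform(sample))
scores = pca.transform(scaler.transform(sample))

# Euclidean optimisation-partition clustering on the reduced dimensions.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(scores)

# Classification functions (here linear discriminant functions) assign
# non-sampled observations to the cluster classes found in the sample.
clf = LinearDiscriminantAnalysis().fit(scores, kmeans.labels_)
pop_scores = pca.transform(scaler.transform(population))
pop_classes = clf.predict(pop_scores)
print(np.bincount(pop_classes))
```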


7.
In using census data, a range of indicators is commonly used to indicate deprivation. This paper examines the validity of these indicators by exploring how well they predict income in surveys (the Family Expenditure Surveys of 1983 and 1990 and the General Household Survey of 1984) which also collect income data. A reasonably parsimonious set of seven socioeconomic variables (as well as controls for age, sex and region) explains about 40% of the variation in log-income. Our results provide a set of weights for a deprivation index and offer no support for the practice of assigning equal weights to the indicators. A census-based proxy would miss a sizable minority of the actual poor and misclassify some with higher incomes. A majority of the 'deprived' are poor by a cash yardstick, but some are not.
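A hedged sketch of the weighting idea on synthetic data (not the FES/GHS analysis itself): regress log-income on a small set of binary indicators, and let the fitted coefficients serve as data-driven deprivation-index weights, in contrast to equal weighting. The indicator names and coefficient values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 7
indicators = rng.binomial(1, 0.3, size=(n, p))         # e.g. no car, renting, overcrowding, ...
true_w = np.array([-0.4, -0.3, -0.25, -0.2, -0.15, -0.1, -0.05])
log_income = 9.5 + indicators @ true_w + rng.normal(scale=0.5, size=n)

# OLS of log-income on the indicators; slopes act as deprivation weights.
X = np.column_stack([np.ones(n), indicators])
beta, *_ = np.linalg.lstsq(X, log_income, rcond=None)
resid = log_income - X @ beta
r2 = 1 - resid.var() / log_income.var()
print("estimated weights:", np.round(beta[1:], 3), " R^2 ~", round(r2, 2))
```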

8.
This paper examines both theoretically and empirically whether the common practice of using OLS multivariate regression models to estimate average treatment effects (ATEs) under experimental designs is justified by the Neyman model for causal inference. Using data from eight large U.S. social policy experiments, the paper finds that estimated standard errors and significance levels for ATE estimators are similar under the OLS and Neyman models when baseline covariates are included in the models, even though theory suggests that this need not be the case. This occurs primarily because treatment effects do not appear to vary substantially across study subjects.
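A brief illustration with simulated data (not the eight experiments analysed in the paper): the Neyman difference-in-means standard error versus the OLS standard error from a regression that adds baseline covariates, using a heteroscedasticity-robust variance estimate for the latter.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                    # baseline covariates
t = rng.binomial(1, 0.5, size=n)               # random assignment
y = 1.0 * t + x @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

# Neyman: difference in means with the conservative variance estimate.
y1, y0 = y[t == 1], y[t == 0]
ate_neyman = y1.mean() - y0.mean()
se_neyman = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))

# OLS with covariates and heteroscedasticity-robust standard errors.
X = sm.add_constant(np.column_stack([t, x]))
fit = sm.OLS(y, X).fit(cov_type="HC2")
print(f"Neyman: {ate_neyman:.3f} (se {se_neyman:.3f})")
print(f"OLS   : {fit.params[1]:.3f} (se {fit.bse[1]:.3f})")
```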

9.
By means of several historical examples, it is shown that it does not appear to be easy to build bridges between rigorous mathematics and reasonable data-analytic procedures for scientific measurements. After mentioning both some positive and some negative aspects of statistics, a formal framework for statistics is presented which contains the concept formation, derivation of results and interpretation of mathematical statistics as three essential steps. The difficulties especially of interpretation are shown for examples in several areas of statistics, such as asymptotics and robustness. Some problems of statistics in two subject-matter sciences are discussed, and a summary and outlook are given.

10.
Technological innovations have made changes in library procedures commonplace. Nonetheless, many librarians have been startled by the University of Nevada, Reno's decision to stop checking in print serials. Four serialists address the future of serials acquisitions in light of the technological advances that continue to transform library procedures. Serials Review 2003; 29:224–229.

11.
The move from print to online journal publishing has allowed the proliferation of journal access programs aimed at poor countries. These programs offer access to online journals on very favorable terms to developing country institutions and readers and are based on the premise that developing world scientists can contribute significantly to ameliorating the conditions of life in their countries. The authors give a brief overview of the environment in which these programs emerged, discuss different orientations of the major programs, examine the case of the Health InterNetwork Access to Research Initiative (HINARI), consider why the World Health Organization (WHO) runs a journal access program for developing countries, and conclude with the accomplishments of HINARI.

12.
We study the asymptotic behaviour of the maximum likelihood estimator based on the observation of a trajectory of a skew Brownian motion through a uniform time discretization. We characterize the speed of convergence and the limiting distribution as the step size goes to zero, which in this case are non-classical, under the null hypothesis that the skew Brownian motion is a standard Brownian motion. This allows us to design a test of the skewness parameter. We show that numerical simulations can easily be performed to estimate the skewness parameter, and we provide an application in biology.
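A sketch of the discretized-path MLE, assuming the standard form of the skew Brownian motion transition density, p_t(x, y) = phi_t(y - x) + beta * sign(y) * phi_t(|x| + |y|) for skewness parameter beta in (-1, 1); the path is simulated under the null (beta = 0, ordinary Brownian motion), and the step size, sample size, and optimizer are illustrative choices rather than the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
n, h = 5000, 1e-3
# Discretized trajectory under the null: ordinary Brownian motion started at 0.
path = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(h), size=n))])

def neg_loglik(beta, x=path[:-1], y=path[1:], h=h):
    # Skew Brownian motion transition density evaluated at each increment.
    dens = (norm.pdf(y - x, scale=np.sqrt(h))
            + beta * np.sign(y) * norm.pdf(np.abs(x) + np.abs(y), scale=np.sqrt(h)))
    return -np.sum(np.log(dens))

res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
print("MLE of the skewness parameter:", round(res.x, 4))  # should be near 0 under the null
```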

13.
14.
15.
Two families of processes, pure-jump processes and jump-diffusion processes, are widely used in the literature. Recent empirical findings demonstrate that, in many situations, the underlying processes of high-frequency data sets are pure-jump processes of infinite variation. Statistical tests have also been proposed to place these empirical findings on a firm theoretical footing. In this paper, we extend the work of Jing et al. (2012) in two respects: (1) the jump process in the null hypothesis and the alternative hypothesis may differ; and (2) the null hypothesis covers more flexible processes, which are more relevant in finance when modelling asset prices or nominal interest rates. Theoretically, the test is shown to be powerful and to keep the type I error probability below the nominal level.

16.
Most data used to study the durations of unemployment spells come from the Current Population Survey (CPS), which is a point-in-time survey and gives an incomplete picture of the underlying duration distribution. We introduce a new sample of completed unemployment spells obtained from panel data and apply CPS sampling and reporting techniques to replicate the type of data used by other researchers. Predicted duration distributions derived from this CPS-like data are then compared to the actual distribution. We conclude that the best inferences that can be made about unemployment durations by using CPS-like data are seriously biased.
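A small synthetic illustration of the sampling problem (not the CPS or the panel data used in the paper): spells in progress at a survey date are length-biased and only their elapsed portion is observed, so a point-in-time cross-section misrepresents the completed-spell distribution. The exponential spell lengths and time window are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Completed unemployment spells (weeks); exponential purely for illustration.
n_spells = 200_000
completed = rng.exponential(scale=12.0, size=n_spells)

# Stagger spell start dates over a long window and take a point-in-time cross-section.
starts = rng.uniform(0, 1000.0, size=n_spells)
ends = starts + completed
survey_date = 500.0
in_progress = (starts <= survey_date) & (ends > survey_date)

elapsed = survey_date - starts[in_progress]      # what a CPS-like survey observes
sampled_completed = completed[in_progress]       # full length of the sampled spells

print("mean completed spell (all spells)  :", round(completed.mean(), 1))
print("mean completed spell (sampled)     :", round(sampled_completed.mean(), 1))  # ~2x: length bias
print("mean elapsed duration observed     :", round(elapsed.mean(), 1))
```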

17.
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the implicit assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution to the heavier-tailed t distribution and introduces skewness via the Box-Cox transformation, providing a unified framework that simultaneously handles outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
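scikit-learn does not provide a multivariate-t mixture, so the sketch below swaps in a Gaussian mixture purely to illustrate the transformation-selection step: profile a common Box-Cox parameter lambda on a grid and pick the value maximizing the mixture log-likelihood plus the Box-Cox Jacobian term. The data and all settings are synthetic placeholders, not the paper's EM algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic positive, right-skewed data from two clusters.
data = np.concatenate([rng.lognormal(mean=0.0, sigma=0.4, size=(300, 2)),
                       rng.lognormal(mean=1.2, sigma=0.4, size=(300, 2))])

def profile_loglik(lam, k=2):
    # Coordinate-wise Box-Cox with a single shared lambda (a simplification).
    z = np.log(data) if abs(lam) < 1e-8 else (data ** lam - 1.0) / lam
    gm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(z)
    # Log-Jacobian of the Box-Cox transform, so likelihoods are comparable across lambda.
    jacobian = (lam - 1.0) * np.log(data).sum()
    return gm.score(z) * len(z) + jacobian

grid = np.linspace(-1.0, 2.0, 13)
lls = [profile_loglik(l) for l in grid]
best = grid[int(np.argmax(lls))]
print("selected Box-Cox lambda ~", round(best, 2))
```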

18.
19.
Small business start-ups create most of the new jobs in our economy. It is an attractive argument to those who want governments to support small businesses. But is it true? Or are large firms the ones that generate most jobs? Michael Anyadike-Danes, Mark Hart and Karen Bonner examine a long-running and acrimonious dispute.

20.