Similar Articles

20 similar articles found.
1.
This article examines the determinants of energy demand for nearly 9,000 institutional buildings in the United States. The data were collected, as part of the federal Institutional Conservation Program, by state energy offices using mail surveys. The article presents energy demand estimates, adjusted for differences in state surveys as well as for nonresponse bias, as functions of energy prices, building characteristics, and fuel-type variables that approximate the installed heating, ventilation, and air-conditioning equipment. Energy price elasticities are found to vary from −0.28 for schools to −1.05 for hospitals.
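The survey data themselves are not reproduced here, but the elasticity estimation can be sketched with simulated data: in a log-log demand specification, the coefficient on log price is the price elasticity. All variable names and parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical data standing in for the building survey: simulate a log-log
# demand relationship and recover the price elasticity by OLS.
rng = np.random.default_rng(0)
n = 500
log_price = rng.normal(0.0, 0.5, n)              # log energy price (assumed)
log_area = rng.normal(8.0, 1.0, n)               # log floor area (assumed)
true_elasticity = -0.6                           # assumed true value
log_demand = (2.0 + true_elasticity * log_price
              + 0.4 * log_area + rng.normal(0.0, 0.1, n))

# In a log-log specification, the price coefficient is the price elasticity.
X = np.column_stack([np.ones(n), log_price, log_area])
beta, *_ = np.linalg.lstsq(X, log_demand, rcond=None)
price_elasticity = beta[1]
```

With enough observations, `price_elasticity` recovers the assumed value closely; the article's estimates additionally adjust for survey differences and nonresponse bias, which this sketch omits.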

2.
Optimal statistical tests under normality assumptions are presented for general interval hypotheses, including equivalence testing and testing for nonzero difference (or for non-unit). These tests are based on the decision theory for Pólya-type distributions and are compared with the usual confidence tests and with 'two one-sided tests' procedures. A formal relationship between some optimal tests and the Anderson and Hauck procedure, as well as a procedure recommended by Patel and Gupta, is given. A new procedure for a generalisation of Student's test, as well as for equivalence testing based on the t-statistic, is presented.

3.
Using Cox regression as the main platform, we study the ensemble approach for variable selection. We use a popular real-data example as well as simulated data with various censoring levels to illustrate the usefulness of the ensemble approach, and study the nature of these ensembles in terms of their strength and diversity. By relating these characteristics to the ensemble's selection accuracy, we provide useful insights into how to choose among different ensemble strategies, as well as guidelines for designing more effective ensembles.

4.
This article considers the Marsaglia effect by proposing a new test of randomness for Lehmer random number generators. Our test is based on the Manhattan distance criterion between consecutive pairs of random numbers rather than the usually adopted Euclidean distance. We derive the theoretical distribution functions of the Manhattan distance for both the overlapping (two-dimensional) and the non-overlapping case. Extensive goodness-of-fit testing as well as empirical experimentation provides ample evidence of the merits of the proposed criterion.
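The distance criterion can be sketched as follows. This is not the article's test statistic or its theoretical distribution; it is a minimal illustration that forms non-overlapping pairs from a Lehmer generator, computes Manhattan distances between consecutive pairs, and compares them to distances from a modern reference generator with a two-sample Kolmogorov-Smirnov statistic. The Park-Miller constants are used purely as a familiar example.

```python
import numpy as np

def lehmer(seed, n, a=16807, m=2**31 - 1):
    # Park-Miller "minimal standard" Lehmer generator (illustrative constants).
    x, out = seed, np.empty(n)
    for i in range(n):
        x = (a * x) % m
        out[i] = x / m
    return out

def manhattan_distances(u):
    # Non-overlapping pairs (u0,u1), (u2,u3), ... viewed as points in the unit
    # square; Manhattan (L1) distance between consecutive points.
    pts = u[: (len(u) // 2) * 2].reshape(-1, 2)
    return np.abs(np.diff(pts, axis=0)).sum(axis=1)

def ks_two_sample(x, y):
    # Two-sample Kolmogorov-Smirnov statistic, computed directly.
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(cdf_x - cdf_y).max()

d_lcg = manhattan_distances(lehmer(12345, 20000))
d_ref = manhattan_distances(np.random.default_rng(0).random(20000))
stat = ks_two_sample(d_lcg, d_ref)
```

A small `stat` indicates that the Lehmer distances are empirically indistinguishable from the reference; the article instead compares against the derived theoretical distribution functions.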

5.
Conditional Studentized Survival Tests for Randomly Censored Models
It is shown that in the case of heterogeneous censoring distributions, Studentized survival tests can be carried out as conditional permutation tests given the order statistics and their censoring status. The result is based on a conditional central limit theorem for permutation statistics. It holds for linear test statistics as well as for sup-statistics. The procedure works under any of the following general circumstances for the two-sample problem: unbalanced sample sizes, highly censored data, certain non-convergent weight functions, or under alternatives. For instance, the two-sample log-rank test can be carried out asymptotically as a conditional test if the relative proportion of uncensored observations vanishes asymptotically while their absolute number tends to infinity. Similar results hold whenever the sample sizes are unbalanced.

6.
Two questions of interest involving nonparametric multiple comparisons are considered. The first question concerns whether it is appropriate to use a multiple comparison procedure as a test of the equality of k treatments, and if it is, which procedure performs best as a test. Our results show that for smaller k values some multiple comparison procedures perform well as tests. The second question concerns whether a joint ranking or a separate ranking multiple comparison procedure performs better as a test and as a device for treatment separation. We find that the joint ranking procedure does slightly better as a test, but for treatment separation the answer depends on the situation.

7.
A simple confidence region is proposed for the multinomial parameter. It is designed for situations having zero cell counts. Simulation studies as well as a real data application show that it performs at least as well as two of the most common confidence regions.

8.
In this paper, we present various diagnostic methods for polyhazard models. Polyhazard models are a flexible family for fitting lifetime data. Their main advantage over single-hazard models, such as the Weibull and log-logistic models, is that they accommodate a large variety of nonmonotone hazard shapes, such as bathtub and multimodal curves. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed, and discussed. The computation of the likelihood displacement as well as of the normal curvature in the local influence method is discussed. Finally, an example with real data is given for illustration.

9.
For two-way layouts in a between-subjects ANOVA design, the aligned rank transform (ART) is compared with the parametric F-test as well as six other nonparametric methods: the rank transform (RT), the inverse normal transform (INT), a combination of ART and INT, Puri and Sen's L statistic, van der Waerden's test, and Akritas and Brunner's ATS. Type I error rates are computed for the uniform and the exponential distributions, both as continuous distributions and in several discretized variations. The computations were performed for balanced and unbalanced designs as well as for several effect models. The aim of this study is to analyze the impact of discrete distributions on the error rate, and it is shown that this impact is restricted to the ART method and to the combined ART-INT method. There are two effects: first, with increasing cell counts their error rates rise beyond any acceptable limit, up to 20 percent and more; second, their rates rise as the number of distinct values of the dependent variable decreases. This behavior is more severe for underlying exponential distributions than for uniform distributions. The ART is therefore not recommended when the mean cell frequencies exceed 10.
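The ART procedure being compared can be sketched for the interaction term of a balanced two-way layout: align the data by removing both main effects, rank the aligned values jointly, then run an ordinary ANOVA on the ranks. The design sizes and the discretized exponential data below are hypothetical, and ties are broken by position rather than by midranks, so this is a simplified sketch of the method, not the study's simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, n = 2, 3, 20                                   # hypothetical balanced 2x3 design
y = np.rint(2.0 * rng.exponential(1.0, (A, B, n)))   # discretized skewed data, no true effects

# Align for the A x B interaction: subtract both main effects, keeping the
# interaction estimate plus error (Y - rowmean - colmean + grandmean).
row = y.mean(axis=(1, 2), keepdims=True)
col = y.mean(axis=(0, 2), keepdims=True)
aligned = y - row - col + y.mean()

# Rank all aligned values jointly (a full implementation would use midranks
# for ties), then compute the ordinary interaction F on the ranks.
flat = aligned.ravel()
ranks = np.empty(flat.size)
ranks[flat.argsort(kind="stable")] = np.arange(1, flat.size + 1)
r = ranks.reshape(y.shape)

cm = r.mean(axis=2)                                  # cell means of ranks
rm, colm, g = r.mean(axis=(1, 2)), r.mean(axis=(0, 2)), r.mean()
ss_ab = n * ((cm - rm[:, None] - colm[None, :] + g) ** 2).sum()
sse = ((r - cm[..., None]) ** 2).sum()
F_interaction = (ss_ab / ((A - 1) * (B - 1))) / (sse / (A * B * (n - 1)))
```

Repeating this over many simulated data sets and counting rejections is how the type I error rates in the study are estimated.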

10.
Contours may be viewed as the 2D outline of the image of an object. This type of data arises in medical imaging as well as in computer vision, can be modeled as data on a manifold, and can be studied using statistical shape analysis. Practically speaking, each observed contour, while theoretically infinite-dimensional, must be discretized for computation. The coordinates of each contour are therefore obtained at k sampling points, so that the contour is represented as a k-dimensional complex vector. While choosing large values of k will result in closer approximations to the original contour, it will also result in higher computational costs in the subsequent analysis. The goal of this study is to determine reasonable values for k that keep the computational cost low while maintaining accuracy. To do this, we consider two methods for selecting sample points and determine lower bounds on k for obtaining a desired level of approximation error under two different criteria. Because this process is computationally inefficient to perform on a large scale, we then develop models for predicting the lower bounds on k from simple characteristics of the contours.
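The trade-off between k and approximation error can be illustrated with a toy contour. The ellipse, the equal-spacing sampling rule, and the max-gap error criterion below are all assumptions for illustration; they are not the paper's two sampling methods or its two criteria.

```python
import numpy as np

# Hypothetical "ground truth": an ellipse discretized very finely, stored as a
# complex vector as in the text.
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dense = 2.0 * np.cos(t) + 1j * np.sin(t)

def max_gap_error(z, k):
    # Retain k equally spaced sample points and report the largest distance
    # from any dense point to its nearest retained point (one crude criterion).
    idx = np.linspace(0, len(z), k, endpoint=False).astype(int)
    kept = z[idx]
    return np.abs(z[:, None] - kept[None, :]).min(axis=1).max()

errors = {k: max_gap_error(dense, k) for k in (8, 16, 32, 64)}
```

The error shrinks as k grows, which is the behavior the study quantifies: given a target error, one reads off the smallest acceptable k.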

11.
In this article, we consider experimental situations where a blocked regular two-level fractional factorial initial design is used. We investigate the use of the semi-fold technique as a follow-up strategy for de-aliasing effects that are confounded in the initial design, as well as an alternative method for constructing blocked fractional factorial designs. A construction method based on the full foldover technique is suggested, and sufficient conditions are obtained under which the semi-fold yields as many estimable effects as the full foldover.

12.
In this article the authors show how, by adequately decomposing the null hypothesis of the multi-sample block-scalar sphericity test, it is possible to obtain the likelihood ratio test statistic as well as a different view of its exact distribution. This enables the construction of well-performing near-exact approximations for the distribution of the test statistic, whose exact distribution is quite elaborate and unmanageable. The near-exact distributions obtained are manageable and perform much better than the available asymptotic distributions, even for small sample sizes, and they show good asymptotic behavior for increasing sample sizes as well as for an increasing number of variables and/or populations involved.

13.
The variance of short-term systematic measurement errors for the difference of paired data is estimated. The difference of paired data is obtained by subtracting the measurement results of two methods, each of which measures the same item only once, without repetition. The unbiased estimators of the short-term systematic measurement error variances based on the one-way random effects model are not fit for practical purposes because they can be negative. The estimators derived here, for balanced as well as unbalanced data, are always positive but biased; they too are based on the one-way random effects model. The biases, variances, and mean squared errors of the positive estimators are derived, together with their estimators. The positive estimators are fit for practical purposes.
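The negativity problem that motivates the paper is easy to exhibit. The sketch below uses the standard one-way random-effects ANOVA estimator and simple truncation at zero; the truncation is only the most naive always-nonnegative fix, not the authors' positive estimators, and all data are simulated with an assumed zero between-item variance so that negative estimates are likely.

```python
import numpy as np

rng = np.random.default_rng(7)
a, n = 10, 2                      # a items, n measurements per item (assumed)
# Zero true between-item variance: negative unbiased estimates occur often.
y = rng.normal(0.0, 1.0, (a, n))

item_means = y.mean(axis=1)
msb = n * ((item_means - y.mean()) ** 2).sum() / (a - 1)      # between mean square
msw = ((y - item_means[:, None]) ** 2).sum() / (a * (n - 1))  # within mean square

unbiased = (msb - msw) / n        # ANOVA estimator: unbiased, but can be negative
truncated = max(0.0, unbiased)    # always non-negative, at the cost of bias
```

Whenever `msb < msw`, the unbiased estimate is negative even though a variance cannot be; any always-positive replacement, including the paper's, necessarily trades unbiasedness for positivity.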

14.
This paper derives several Lagrange Multiplier (LM) tests for the unbalanced nested error component model. Economic data with a natural nested grouping include firms grouped by industry, or students grouped by school. The LM tests derived include the joint test for both effects as well as the test for one effect conditional on the presence of the other. The paper also derives the standardized versions of these tests, their asymptotic locally mean most powerful versions, as well as their versions robust to local misspecification. Monte Carlo experiments are conducted to study the performance of these LM tests.

15.
A hierarchical model for extreme wind speeds
Summary.  A typical extreme value analysis is often carried out on the basis of simplistic inferential procedures, though the data being analysed may be structurally complex. Here we develop a hierarchical model for hourly gust maximum wind speed data, which attempts to identify site and seasonal effects for the marginal densities of hourly maxima, as well as for the serial dependence at each location. A Gaussian model for the random effects exploits the meteorological structure in the data, enabling increased precision for inferences at individual sites and in individual seasons. The Bayesian framework that is adopted is also exploited to obtain predictive return level estimates at each site, which incorporate uncertainty due to model estimation, as well as the randomness that is inherent in the processes that are involved.

16.
An approach to teaching linear regression with unbalanced data is outlined that emphasizes its role as a method of adjustment for associated regressors. The method is introduced via direct standardization, a simple form of regression for categorical regressors. Properties of regression in the presence of association and interaction are emphasized. Least squares is introduced as a more efficient way of calculating adjusted effects for which exact decompositions of the variance are possible. Interval-scaled regressors are initially grouped and treated as categorical; polynomial regression and analysis of covariance can be introduced later as alternative methods.
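Direct standardization, the entry point of this teaching approach, can be shown in a few lines: compute the group difference within each stratum of a categorical regressor, then average those differences with weights from a common (here, pooled) stratum distribution. The data below are simulated and all numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
# Hypothetical unbalanced data: the outcome depends on a binary stratum,
# and group membership is associated with that stratum (confounding).
stratum = rng.integers(0, 2, n)
group = rng.binomial(1, np.where(stratum == 1, 0.8, 0.2))
y = 1.0 * stratum + 0.5 * group + rng.normal(0.0, 0.2, n)

# The crude group difference mixes the group effect with the stratum effect.
crude = y[group == 1].mean() - y[group == 0].mean()

# Direct standardization: weight stratum-specific group differences by the
# pooled stratum distribution.
w = np.array([(stratum == s).mean() for s in (0, 1)])
diffs = np.array([y[(group == 1) & (stratum == s)].mean()
                  - y[(group == 0) & (stratum == s)].mean() for s in (0, 1)])
adjusted = float((w * diffs).sum())
```

The crude difference is inflated by the stratum imbalance, while the standardized difference recovers the group effect; least squares with dummy regressors gives the same kind of adjusted effect more efficiently, which is the pedagogical progression the abstract describes.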

17.
In late-phase confirmatory clinical trials in oncology, time-to-event (TTE) endpoints are commonly used as primary endpoints for establishing the efficacy of investigational therapies. Among these TTE endpoints, overall survival (OS) is regarded as the gold standard. However, OS data can take years to mature, and its use for measuring efficacy can be confounded by post-treatment rescue therapies or supportive care. Therefore, to accelerate the development process and better characterize the treatment effect of new investigational therapies, other TTE endpoints such as progression-free survival and event-free survival (EFS) are applied as primary efficacy endpoints in some confirmatory trials, either as surrogates for OS or as direct measures of clinical benefit. For evaluating novel treatments for acute myeloid leukemia, EFS has gradually been recognized as a direct measure of clinical benefit. However, the application of an EFS endpoint is still controversial, mainly because of debate surrounding the definition of treatment failure (TF) events. In this article, we investigate the EFS endpoint under the most conservative definition of the timing of TF, namely Day 1 after randomization. Specifically, the corresponding non-proportional hazards pattern of the EFS endpoint is investigated with both analytical and numerical approaches.

18.
"This article presents and implements a new method for making stochastic population forecasts that provide consistent probability intervals. We blend mathematical demography and statistical time series methods to estimate stochastic models of fertility and mortality based on U.S. data back to 1900 and then use the theory of random-matrix products to forecast various demographic measures and their associated probability intervals to the year 2065. Our expected total population sizes agree quite closely with the Census medium projections, and our 95 percent probability intervals are close to the Census high and low scenarios. But Census intervals in 2065 for ages 65+ are nearly three times as broad as ours, and for 85+ are nearly twice as broad. In contrast, our intervals for the total dependency and youth dependency ratios are more than twice as broad as theirs, and our ratio for the elderly dependency ratio is 12 times as great as theirs. These items have major implications for policy, and these contrasting indications of uncertainty clearly show the limitations of the conventional scenario-based methods."

19.
Stochastic Models, 2013, 29(4): 555-568
The covariance of the number of renewals in a fixed time, N_t, and the ensuing excess life time, Y_t, is derived using matrix-analytic methods for the stationary PH-renewal process. Specific results for the Erlang and hyperexponential processes are provided to illustrate the ease of computation. Properties concerning the sign and the behavior of the covariance as t → ∞ are provided throughout. Parameter estimation for renewal processes that cannot be fully observed serves as the motivation for our derivations. These statistical applications, as well as links to estimation for service time distributions in queues, shed light on the type of problems for which the covariance is useful.
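The quantities involved can be made concrete by simulation. The sketch below estimates Cov(N_t, Y_t) for an ordinary renewal process with Erlang(2) inter-arrivals (a simple PH distribution); the rate, horizon, and path count are assumptions, and the paper's stationary version would additionally require an equilibrium first interval, which this sketch skips.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov_renewals_excess(t, rate=2.0, n_paths=5000, horizon=100):
    # Erlang(2, rate) inter-arrival times: sums of two exponentials.
    inter = rng.exponential(1.0 / rate, (n_paths, horizon, 2)).sum(axis=2)
    arrivals = np.cumsum(inter, axis=1)
    N_t = (arrivals <= t).sum(axis=1)              # renewals in [0, t]
    Y_t = arrivals[np.arange(n_paths), N_t] - t    # excess life at t
    return np.cov(N_t, Y_t)[0, 1]

cov_hat = cov_renewals_excess(t=5.0)
```

The matrix-analytic formulas in the paper give this covariance exactly; simulation like this is only a check, and it also shows why the covariance matters when the process cannot be fully observed.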

20.
This paper deals with the problem of finding nearly D-optimal designs for multivariate quadratic regression on a cube that take as few observations as possible while still allowing estimation of all parameters. It is shown that among the class of all such designs taking as many observations as possible at the corners of the cube, there is one that is asymptotically efficient as the dimension of the cube increases. Methods for constructing designs in this class, using balanced arrays, are given. It is shown that the designs so constructed for dimensions ≤ 6 compare well with existing computer-generated designs, and in dimensions 5 and 6 are better than those in the literature prior to 1978.
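The D-criterion being optimized can be sketched for a small corner-heavy design. This is a generic evaluation of det(X'X) for a full quadratic model on the cube, not the paper's balanced-array construction; the two-factor design below (all corners plus face centers and a center point) is an assumed example.

```python
import numpy as np
from itertools import product

def quadratic_model_matrix(points):
    # Full second-order model in d factors: 1, x_i, x_i^2, x_i*x_j.
    pts = np.asarray(points, dtype=float)
    d = pts.shape[1]
    cols = [np.ones(len(pts))]
    for i in range(d):
        cols += [pts[:, i], pts[:, i] ** 2]
    for i in range(d):
        for j in range(i + 1, d):
            cols.append(pts[:, i] * pts[:, j])
    return np.column_stack(cols)

d = 2
corners = list(product([-1.0, 1.0], repeat=d))           # all 2^d cube corners
axial = [tuple(s if j == i else 0.0 for j in range(d))
         for i in range(d) for s in (-1.0, 1.0)]
design = corners + axial + [(0.0,) * d]                  # 9 runs, 6 parameters
X = quadratic_model_matrix(design)
p = X.shape[1]
# Per-run D-criterion value: det(X'X)^(1/p) scaled by the run count.
d_value = np.linalg.det(X.T @ X) ** (1.0 / p) / len(design)
```

A positive `d_value` confirms all six quadratic-model parameters are estimable from nine runs; the paper's constructions aim for such designs with as few non-corner runs as possible in higher dimensions.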
