Similar Literature
20 similar documents found (search time: 15 ms)
1.
Based on spatio-temporal data for 31 provinces over 1980-2005, this paper analyses the relationship between China's cement consumption and economic development, together with regional demand models. The results show that over these 25 years China's cement consumption grew in step with population and per-capita GDP: total cement consumption is a Cobb-Douglas function of total population, per-capita GDP and fixed-asset investment, and cement consumption has a double-logarithmic relationship with total fixed-asset investment. Cross-sectional analyses for 1996, 2000 and 2005 show that the cement consumption of the 31 provinces is driven by the two Cobb-Douglas factors of total population and per-capita GDP (or per-capita fixed-asset investment). Fitted results for the three periods are reported and the associated elasticities analysed, providing a scientific basis for planning China's cement production and its regional distribution.
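A minimal sketch of the demand relations described, in LaTeX; the symbols and coefficients are illustrative, since the abstract does not report the fitted values:

```latex
% Hypothesized Cobb-Douglas demand relations (A, \alpha, \beta, a, b illustrative):
C = A\,P^{\alpha} g^{\beta}
\quad\Longleftrightarrow\quad
\ln C = \ln A + \alpha \ln P + \beta \ln g,
\qquad
\ln C = a + b \ln I
```

Here $C$ is cement consumption, $P$ total population, $g$ per-capita GDP, and $I$ total fixed-asset investment; $\alpha$ and $\beta$ play the role of the demand elasticities analysed in the paper.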

2.
In longitudinal data analysis with random subject effects, there is often within-subject serial correlation and possibly unequally spaced observations. This serial correlation can be partially confounded with the random between-subject effects. In real data, it is often not clear whether there is serial correlation, random subject effects, or both. Using inference based on the likelihood function, it is not always possible to identify the correct model, especially in small samples. However, some effort should be made to find a good model rather than simply making assumptions. This often means trying models with random coefficients, with serial correlation, and with both. Model selection criteria such as likelihood ratio tests and Akaike's Information Criterion (AIC) can be used. The problem of modelling serial correlation with unequally spaced observations is addressed. A real data example is presented with apparent heterogeneity of variances, possible serial correlation, and between-subject random effects; in this example, the random subject effects turn out to explain both the serial correlation and the variance heterogeneity.
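As a hedged illustration (not the paper's procedure), the sketch below builds the marginal covariance of one subject's observations as random intercept plus continuous-time AR(1) serial correlation, which handles unequally spaced times, and compares the two structures by AIC; all numbers are made up:

```python
import numpy as np
from scipy.stats import multivariate_normal

def marginal_cov(times, sig_b2, sig_e2, rho):
    """V = sig_b2 * J + sig_e2 * R, with continuous-time AR(1) serial
    correlation R_ij = rho ** |t_i - t_j| for unequally spaced times;
    rho = 0 gives the pure random-intercept model."""
    t = np.asarray(times, dtype=float)
    R = rho ** np.abs(t[:, None] - t[None, :])
    J = np.ones((len(t), len(t)))
    return sig_b2 * J + sig_e2 * R

def gaussian_loglik(y, mu, V):
    return multivariate_normal(mean=mu, cov=V).logpdf(y)

times = [0.0, 0.5, 2.0, 3.5]                 # unequally spaced visits
y = np.array([1.2, 1.5, 2.1, 2.0])
mu = np.full(4, y.mean())
ll_re_only = gaussian_loglik(y, mu, marginal_cov(times, 0.4, 0.3, 0.0))
ll_re_ar   = gaussian_loglik(y, mu, marginal_cov(times, 0.4, 0.3, 0.6))
print(2 * 3 - 2 * ll_re_only, 2 * 4 - 2 * ll_re_ar)   # smaller AIC wins
```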

3.
Data in the form of proportions with extra-dispersion (over- or under-dispersion) arise in many biomedical, epidemiological, and toxicological applications. In some situations two such samples arise, and the problem is to test the equality of the proportions in the two groups with unspecified and possibly unequal extra-dispersion parameters. This problem is analogous to the traditional Behrens-Fisher problem, in which two normal population means with possibly unequal variances are compared. To deal with this problem we develop eight tests and compare them in terms of empirical size and power in a simulation study. Simulations show that a C(α) test based on extended quasi-likelihood estimates of the nuisance parameters holds the nominal level most closely and is at least as powerful as any other statistic that is not liberal. It has the simplest formula, requires estimates of the nuisance parameters only under the null hypothesis, and is the easiest to calculate. It is also robust in the sense that no distributional assumption is required to develop it.
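The exact C(α) statistic is not given in the abstract; the following is a simplified moment-based stand-in that illustrates testing two over-dispersed proportions, with the empirical cluster-level variance absorbing the extra-dispersion:

```python
import numpy as np
from scipy.stats import norm

def overdispersed_prop_test(x1, n1, x2, n2):
    """Simplified Wald-type test of H0: p1 = p2 when cluster-level
    proportions are over-dispersed. A stand-in for the paper's C(alpha)
    statistic, which estimates nuisance parameters under H0 only."""
    p1, p2 = x1 / n1, x2 / n2            # per-cluster proportions
    m1, m2 = p1.mean(), p2.mean()
    v1 = p1.var(ddof=1) / len(p1)        # empirical variances absorb
    v2 = p2.var(ddof=1) / len(p2)        # the extra-dispersion
    z = (m1 - m2) / np.sqrt(v1 + v2)
    return z, 2 * norm.sf(abs(z))

x1 = np.array([3, 5, 2, 8]); n1 = np.array([10, 12, 9, 15])
x2 = np.array([6, 7, 9, 5]); n2 = np.array([11, 10, 14, 9])
print(overdispersed_prop_test(x1, n1, x2, n2))
```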

4.
The problem of comparing one subpopulation with several others in terms of means, with the goal of estimating the smallest difference between the means, commonly arises in biology, medicine, and many other scientific fields. A generalization of the Strassburger-Bretz-Hochberg approach for two comparisons is presented for cases with three or more comparisons. The method allows the construction of an interval estimator for the smallest mean difference that is compatible with the Min test. An application to a fluency-disorder study is illustrated. Simulations confirmed adequate probability coverage for normally distributed outcomes across a number of designs.
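One naive Min-test-compatible construction (not the Strassburger-Bretz-Hochberg refinement described in the paper) takes the smallest of the per-comparison one-sided lower bounds:

```python
import numpy as np
from scipy.stats import t

def min_diff_lower_bound(ref, groups, alpha=0.05):
    """Naive lower confidence bound for min_i (mean(g_i) - mean(ref)):
    the smallest of the per-comparison one-sided lower bounds, an
    intersection-union-style sketch rather than the paper's method."""
    bounds = []
    for g in groups:
        d = np.mean(g) - np.mean(ref)
        se = np.sqrt(np.var(g, ddof=1) / len(g) + np.var(ref, ddof=1) / len(ref))
        df = len(g) + len(ref) - 2
        bounds.append(d - t.ppf(1 - alpha, df) * se)
    return min(bounds)

rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, 30)
groups = [rng.normal(m, 1.0, 30) for m in (0.8, 1.1, 0.9)]
print(min_diff_lower_bound(ref, groups))
```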

5.
In this paper, a new nonparametric methodology is developed for testing whether the changing pattern of a response variable over multiple ordered sub-populations differs between two treatment groups. The question is formalized as a nonparametric two-sample comparison of the stochastic order among subsamples, through U-statistics with accommodations for zero-inflated distributions. A novel bootstrap procedure is proposed to obtain critical values at a given type I error level, and bootstrapped p-values are obtained through simulated samples. It is proved that, conditional on certain sufficient statistics, the distribution of the test statistic does not depend on the underlying distributions of the subsamples. The study also develops a feasible framework for power studies to determine sample sizes, which is necessary in real-world applications. Simulation results suggest that the test is consistent. The methodology is illustrated with a biological experiment using a split-plot design, where significant differences in the changing patterns of seed weight between treatments are found with relatively small subsample sizes.
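A generic sketch of the bootstrap idea for two samples; the paper's statistic additionally handles multiple ordered subpopulations and accommodates zero inflation more carefully:

```python
import numpy as np

def mw_u(x, y):
    """Mann-Whitney U-statistic estimate of P(X < Y), ties counted 1/2."""
    x, y = np.asarray(x), np.asarray(y)
    return (x[:, None] < y[None, :]).mean() + 0.5 * (x[:, None] == y[None, :]).mean()

def bootstrap_pvalue(x, y, n_boot=2000, seed=0):
    """Bootstrap test of H0: no stochastic ordering (theta = 1/2),
    resampling from the pooled sample to mimic the null."""
    rng = np.random.default_rng(seed)
    obs = mw_u(x, y) - 0.5
    pooled = np.concatenate([x, y])
    stats = [mw_u(rng.choice(pooled, len(x)), rng.choice(pooled, len(y))) - 0.5
             for _ in range(n_boot)]
    return np.mean(np.abs(stats) >= abs(obs))

x = np.array([0, 0, 0, 1.2, 2.5, 3.1, 0, 4.0])   # zero-inflated sample
y = np.array([0, 2.2, 3.5, 4.1, 0, 5.3, 2.9, 3.8])
print(bootstrap_pvalue(x, y))
```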

6.
In this article, we present the problem of selecting a good stochastic system with high probability and minimum total simulation cost when the number of alternatives is very large. We propose a sequential approach that starts with the Ordinal Optimization procedure to select a subset that overlaps with the set of the actual best m% of systems with high probability. We then use Optimal Computing Budget Allocation to allocate the available computing budget so as to maximize the Probability of Correct Selection, followed by a Subset Selection procedure to obtain a smaller subset containing the best system from the previously selected subset. Finally, the Indifference-Zone procedure is used to select the best system among the survivors of the previous stage. Numerical tests of the combined procedure show that it selects a good stochastic system with high probability using a minimum number of simulation samples when the number of alternatives is large, and that it identifies a good system in very short simulation time.
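A schematic Python sketch of the multi-stage pipeline on made-up systems; the real OCBA stage allocates budget using variances and mean gaps rather than a flat top-up:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = rng.normal(0, 1, 1000)                  # 1000 alternative systems
sim = lambda i, n: rng.normal(true_means[i], 1, n)   # noisy simulation oracle

# Stage 1 (ordinal optimization): cheap screening, keep the top 5%
est = np.array([sim(i, 5).mean() for i in range(1000)])
survivors = np.argsort(est)[-50:]

# Stage 2 (budget allocation, schematic): more replications for survivors
est2 = {i: sim(i, 50).mean() for i in survivors}
short_list = sorted(est2, key=est2.get)[-5:]

# Stage 3 (indifference zone, schematic): heavy sampling of the short list
final = {i: sim(i, 500).mean() for i in short_list}
best = max(final, key=final.get)
print(best, true_means[best], true_means.max())
```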

7.
Grouped survival data with possible interval censoring arise in a variety of settings. This paper presents nonparametric Bayes methods for the analysis of such data. The random cumulative hazard, common to every subject, is assumed to be a realization of a Lévy process, and a time-discrete beta process, introduced by Hjort, is used to model the prior. A sampling-based Monte Carlo algorithm is used to find posterior estimates of several quantities of interest, and the methodology is also used to check further modelling assumptions. The approach is illustrated with data on times to cosmetic deterioration in breast-cancer patients, and an extension is presented to handle two interval-censored times in tandem (as with some AIDS incubation data).
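A minimal sketch of drawing from a time-discrete beta process prior for the hazard, assuming a constant prior hazard guess; the grid, concentration c, and values are all illustrative:

```python
import numpy as np

def beta_process_prior_draw(h0, c, rng):
    """Draw discrete hazards from a time-discrete beta process prior
    (Hjort): h_j ~ Beta(c * h0_j, c * (1 - h0_j)), where h0 is the prior
    guess at the hazard and c controls the prior concentration."""
    h = rng.beta(c * h0, c * (1.0 - h0))
    surv = np.cumprod(1.0 - h)            # survival curve on the grid
    return h, surv

rng = np.random.default_rng(42)
grid_hazard = np.full(24, 0.05)           # prior guess: 5% hazard per period
h, S = beta_process_prior_draw(grid_hazard, c=4.0, rng=rng)
print(S[[5, 11, 23]])                     # one prior draw of S(6), S(12), S(24)
```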

8.
Poisson sampling is a method of unequal-probability sampling with random sample size. Several fixed-size implementations of the Poisson sampling design exist, almost all of which are rejective methods; that is, the sample is not always accepted, so the existing methods can be time-consuming or even infeasible in some situations. In this paper, a fast non-rejective method, efficient even for large populations, is proposed and studied. The method is a new design for selecting a sample of fixed size with unequal inclusion probabilities. For large populations the proposed design is very close to strict πps sampling, which is similar to the conditional Poisson (CP) sampling design, but its implementation is much more efficient than CP sampling, and the inclusion probabilities can be calculated recursively.
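For contrast with the paper's non-rejective design, here is the baseline rejective conditional Poisson scheme, which repeatedly draws Poisson samples until one of the required size appears:

```python
import numpy as np

def conditional_poisson_rejective(p, n, rng, max_tries=100_000):
    """Baseline rejective CP sampling: draw Poisson samples (each unit k
    included independently with probability p_k) until a sample of size
    exactly n appears. Illustrates why rejective methods can be slow;
    the paper's design avoids rejection entirely."""
    for tries in range(1, max_tries + 1):
        sample = np.flatnonzero(rng.random(len(p)) < p)
        if len(sample) == n:
            return sample, tries
    raise RuntimeError("no fixed-size sample accepted")

rng = np.random.default_rng(7)
p = rng.uniform(0.01, 0.3, size=500)
p *= 25 / p.sum()                          # scale so the expected size is 25
sample, tries = conditional_poisson_rejective(p, 25, rng)
print(len(sample), "units accepted after", tries, "tries")
```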

9.
Summary. This paper is concerned with designs in which each experimental unit is assigned to treatments more than once, whether to different treatments or repeatedly to the same one. An easy method is presented for constructing balanced minimal repeated measurements designs with unequal period sizes whenever the number of periods is less than the number of treatments. Strongly balanced minimal repeated measurements designs with unequal period sizes are also constructed under the same condition.

10.
ABSTRACT

This paper presents a modified skew-normal (SN) model that contains the normal model as a special case. Unlike the usual SN model, the Fisher information matrix of the proposed model is always non-singular. Despite this desirable property for regular asymptotic inference, in the proposed model, as in the SN model, the maximum likelihood estimator (MLE) of the skewness parameter may diverge with positive probability in samples of moderate size. As a solution to this problem, a modified score function is used for estimating the skewness parameter, and the modified MLE is proved to be always finite. The quasi-likelihood approach is used to build confidence intervals. When the model includes location and scale parameters, the proposed method is combined with the unmodified maximum likelihood estimates of those parameters.
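A small illustration of the divergence problem and one possible remedy, using scipy's skew-normal; the ridge-type penalty is an illustrative stand-in for the paper's modified score function:

```python
import numpy as np
from scipy.stats import skewnorm
from scipy.optimize import minimize_scalar

# When every observation has the same sign, the skew-normal MLE of the
# shape parameter runs off toward infinity; a penalty keeps it finite.
x = np.abs(np.random.default_rng(3).normal(size=20))   # all positive

def neg_pen_loglik(a):
    # Illustrative ridge-type penalty, not the paper's modified score
    return -(skewnorm.logpdf(x, a).sum() - 0.01 * a**2)

res = minimize_scalar(neg_pen_loglik, bounds=(-50, 50), method="bounded")
print("penalized shape estimate:", res.x)
print("unpenalized MLE of shape:", skewnorm.fit(x, floc=0, fscale=1)[0])
```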

11.
Summary. Smoothing splines via the penalized least squares method provide versatile and effective nonparametric models for regression with Gaussian responses. The computation of smoothing splines is generally of order O(n³), n being the sample size, which severely limits their practical applicability. We study more scalable computation of smoothing spline regression via certain low-dimensional approximations that are asymptotically as efficient. A simple algorithm is presented, and the Bayes model associated with the approximations is derived, with the latter guiding the porting of Bayesian confidence intervals. The practical choice of the dimension of the approximating space is determined through simulation studies, and empirical comparisons of the approximations with the exact solution are presented. Also evaluated is a simple modification of the generalized cross-validation method for smoothing parameter selection, which largely fixes the occasional undersmoothing suffered by generalized cross-validation.
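A sketch of the low-dimensional approximation idea: restrict the representer expansion to q ≪ n knots so the solve costs O(nq²) instead of O(n³). A simple Brownian-motion kernel stands in for the cubic-spline reproducing kernel:

```python
import numpy as np

def low_rank_spline_fit(x, y, q=30, lam=1e-3, seed=0):
    """Low-dimensional smoothing-spline-type approximation: expand the
    fit over q << n random knots and solve the penalized least squares
    problem in that subspace, an O(n q^2) computation."""
    rng = np.random.default_rng(seed)
    knots = np.sort(rng.choice(x, q, replace=False))
    k = lambda s, t: 1.0 + np.minimum.outer(s, t)   # simple PD kernel on [0, 1]
    Knq, Kqq = k(x, knots), k(knots, knots)
    c = np.linalg.solve(Knq.T @ Knq + lam * len(x) * Kqq, Knq.T @ y)
    return lambda xnew: k(np.atleast_1d(xnew), knots) @ c

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 500))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 500)
fhat = low_rank_spline_fit(x, y)
print(fhat(np.array([0.25, 0.5, 0.75])))
```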

12.
In recent years much effort has been devoted to maximum likelihood estimation of generalized linear mixed models. Most existing methods use the EM algorithm, with various techniques for handling the intractable E-step. In this paper, a new implementation of a stochastic approximation algorithm with a Markov chain Monte Carlo method is investigated. The proposed algorithm is computationally straightforward and its convergence is guaranteed. A simulation study and three real data sets, including the challenging salamander data, are used to illustrate the procedure and to compare it with some existing methods. The results indicate that the proposed algorithm is an attractive alternative for problems with a large number of random effects or with high-dimensional intractable integrals in the likelihood function.
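A heavily simplified stochastic-approximation MCMC sketch for a logistic random-intercept model: a Metropolis step imputes the random effects, then the parameters are updated with decreasing step sizes. The step-size constants are ad hoc, and this is not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_grp, n_obs = 50, 10                      # 50 groups of 10 observations
beta_true, sd_true = 0.7, 1.0
b_true = rng.normal(0.0, sd_true, n_grp)
x = rng.normal(size=(n_grp, n_obs))
prob = 1.0 / (1.0 + np.exp(-(beta_true * x + b_true[:, None])))
y = (rng.random((n_grp, n_obs)) < prob).astype(float)

def grp_loglik(beta, b_i, i):
    eta = beta * x[i] + b_i
    return (y[i] * eta - np.log1p(np.exp(eta))).sum()

beta, sd2, b = 0.0, 1.0, np.zeros(n_grp)
for t in range(1, 301):
    for i in range(n_grp):                 # Metropolis step imputes each b_i
        prop = b[i] + rng.normal(0.0, 0.5)
        logr = (grp_loglik(beta, prop, i) - prop**2 / (2 * sd2)
                - grp_loglik(beta, b[i], i) + b[i]**2 / (2 * sd2))
        if np.log(rng.random()) < logr:
            b[i] = prop
    gamma = 1.0 / t                        # decreasing SA step size
    sd2 += gamma * ((b**2).mean() - sd2)   # SA update of random-effect variance
    p = 1.0 / (1.0 + np.exp(-(beta * x + b[:, None])))
    beta += gamma * 4.0 * ((y - p) * x).mean()   # crude scaled-score ascent
print("beta ~", beta, " sd ~", np.sqrt(sd2))
```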

13.
Abstract

In this article, we study variable selection and estimation for linear regression models with missing covariates. The proposed estimation method is almost as efficient as the popular least-squares-based method under normal random errors, and is empirically shown to be much more efficient and robust under heavy-tailed errors or outliers in the responses and covariates. To achieve sparsity, a SCAD-based variable selection procedure is proposed that conducts estimation and variable selection simultaneously; it is shown to possess the oracle property. To handle the missing covariates, we consider inverse probability weighted estimators for the linear model when the selection probability is known or unknown. The estimator using the estimated selection probability is shown to have a smaller asymptotic variance than the one using the true selection probability, and is therefore more efficient; thus the important Horvitz-Thompson property is verified for the penalized rank estimator with missing covariates in the linear model. Numerical examples demonstrate the performance of the estimators.
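Only the inverse-probability-weighting step is sketched below, with an ordinary least-squares loss standing in for the paper's SCAD-penalized rank loss; the selection probability is estimated by logistic regression on always-observed variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_least_squares(X, y, observed, Z):
    """Inverse-probability-weighted least squares for missing covariates:
    complete cases are weighted by 1 / pi_hat, where pi_hat is estimated
    from the always-observed variables Z. Incomplete cases get weight 0."""
    pi_hat = LogisticRegression().fit(Z, observed).predict_proba(Z)[:, 1]
    w = observed / pi_hat
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
n = 400
Z = rng.normal(size=(n, 1))                         # always observed
x2 = 0.5 * Z[:, 0] + rng.normal(size=n)             # sometimes missing
X = np.column_stack([np.ones(n), Z[:, 0], x2])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)
observed = rng.random(n) < 1 / (1 + np.exp(-Z[:, 0]))  # MAR given Z
Xo = X.copy(); Xo[~observed, 2] = 0.0               # zero-fill; weight is 0 anyway
print(ipw_least_squares(Xo, y, observed.astype(int), Z))
```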

14.
Monte Carlo simulations are performed for a broad range of conditions. These simulations indicate that the powers of alternative tests under the generalized MANOVA model for small samples differ substantially when a large reduction of the number of polynomial parameters is applied. The results show that, if the response covariance matrix ∑ is known, the best alternative is to use ∑. If, however, ∑ is unknown, substituting an identity matrix for ∑ is recommended; this alternative usually yields a test with more power than the test using the usual estimate of ∑ employing covariates, or the test using an estimate of ∑ obtained from another sample.

15.
This paper provides a review of the many applications of statistics within the field of phylogenetics, that is, the study of evolutionary history. The reader is assumed to be a statistician rather than a phylogeneticist, so some background is given on what phylogenetics is, along with a brief history of different approaches to phylogenetic inference. The latter half of the paper focuses on a series of open statistical problems in the field with the aim of encouraging more statisticians to engage with this fascinating area of research.

16.
An analysis of inter-rater agreement is presented. We study the problem of several raters using a Bayesian model based on the Dirichlet distribution. Inter-rater agreement, including global and partial agreement, is studied by determining the joint posterior distribution of the raters. Posterior distributions are computed with a direct resampling technique. The method is illustrated with an example involving four residents diagnosing 12 psychiatric patients suspected of having a thought disorder. First, employing analytical and resampling methods, total agreement among the four raters is examined with a Bayesian testing technique. Then partial agreement is examined by determining the posterior probability of certain orderings among the rater means. The power of resampling is revealed by its ability to compute the complex multiple integrals that represent the various posterior probabilities of agreement and disagreement among several raters.
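A minimal sketch of the direct-resampling idea with hypothetical counts: draw from each rater's Dirichlet posterior and estimate the posterior probability of an ordering among rater means by Monte Carlo:

```python
import numpy as np

# Hypothetical counts: each rater classifies 12 patients into 3 categories.
# With a uniform Dirichlet(1,1,1) prior the posterior is Dirichlet(counts + 1).
rng = np.random.default_rng(5)
counts = {"A": [7, 3, 2], "B": [5, 4, 3], "C": [4, 4, 4], "D": [2, 5, 5]}
scores = np.array([0.0, 1.0, 2.0])        # category scores defining a rater "mean"

draws = {r: rng.dirichlet(np.array(c) + 1, size=10_000) @ scores
         for r, c in counts.items()}
p_order = np.mean((draws["A"] < draws["B"]) &
                  (draws["B"] < draws["C"]) &
                  (draws["C"] < draws["D"]))
print("P(mu_A < mu_B < mu_C < mu_D | data) ~", p_order)
```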

17.
Consider a two-by-two factorial experiment with more than one replicate. Suppose that we have uncertain prior information that the two-factor interaction is zero. We describe new simultaneous frequentist confidence intervals for the four population cell means, with simultaneous confidence coefficient 1 − α, that utilize this prior information in the following sense. These simultaneous confidence intervals define a cube with expected volume that (a) is relatively small when the two-factor interaction is zero and (b) has maximum value that is not too large. Also, these intervals coincide with the standard simultaneous confidence intervals obtained by Tukey’s method, with simultaneous confidence coefficient 1 − α, when the data strongly contradict the prior information that the two-factor interaction is zero. We illustrate the application of these new simultaneous confidence intervals to a real data set.
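For reference, a sketch of the baseline Tukey simultaneous intervals (for all pairwise differences of the 2×2 cell means, equal replication assumed) that the paper's intervals coincide with when the data contradict the prior information:

```python
import numpy as np
from itertools import combinations
from scipy.stats import studentized_range

def tukey_pairwise_cis(cells, alpha=0.05):
    """Standard Tukey simultaneous CIs for all pairwise differences of
    cell means in a replicated factorial, assuming equal replication."""
    k, n = len(cells), len(cells[0])
    df = k * (n - 1)
    s2 = np.mean([np.var(c, ddof=1) for c in cells])   # pooled variance
    q = studentized_range.ppf(1 - alpha, k, df)
    half = q * np.sqrt(s2 / n)
    means = [np.mean(c) for c in cells]
    return {(i, j): (means[i] - means[j] - half, means[i] - means[j] + half)
            for i, j in combinations(range(k), 2)}

rng = np.random.default_rng(2)
cells = [rng.normal(m, 1, 8) for m in (0.0, 0.5, 0.4, 0.9)]   # four 2x2 cells
print(tukey_pairwise_cis(cells)[(0, 3)])
```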

18.
As the core maker of a firm's investment decisions, the CEO inevitably exerts an important influence on the firm's R&D activity through his or her personal characteristics. Using the World Bank's 2005 survey of 12,065 firms in 120 Chinese cities, this paper analyses the effect of three factors (CEO education level, CEO tenure, and CEO autonomy) on the firm's R&D investment. After controlling for firm characteristics, industry, and other factors, the empirical results show that: (1) CEO education level is significantly positively related to R&D investment, i.e., better-educated CEOs attach more importance to R&D; (2) CEO tenure is significantly positively related to R&D investment, i.e., a longer tenure helps the CEO focus on long-term goals and thus increase R&D spending; and (3) CEO autonomy is also significantly positively related to R&D investment, suggesting that greater autonomy allows CEOs to realize the value of their human capital through effort, which motivates them to invest more in R&D.
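A hypothetical regression specification of the kind described, run on simulated data; the variable names and coefficients are invented stand-ins for the survey variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ceo_edu": rng.integers(1, 6, n),          # education level, 1-5 (hypothetical)
    "ceo_tenure": rng.integers(1, 25, n),      # years in post (hypothetical)
    "ceo_autonomy": rng.random(n),             # autonomy index in [0, 1] (hypothetical)
    "firm_size": rng.normal(5, 1, n),          # log employees (control)
    "industry": rng.integers(0, 8, n),         # industry code (control)
})
df["rd_intensity"] = (0.3 * df.ceo_edu + 0.05 * df.ceo_tenure
                      + 1.2 * df.ceo_autonomy + 0.2 * df.firm_size
                      + rng.normal(0, 1, n))

model = smf.ols("rd_intensity ~ ceo_edu + ceo_tenure + ceo_autonomy"
                " + firm_size + C(industry)", data=df).fit()
print(model.params[["ceo_edu", "ceo_tenure", "ceo_autonomy"]])
```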

19.
20.
For some operable products with critical reliability constraints, it is important to estimate their residual lives accurately so that maintenance actions can be arranged suitably and efficiently. In the literature, most publications have dealt with this issue by considering only one-dimensional degradation data. However, this may not be reasonable in situations where a product has two or more performance characteristics (PCs); in such situations, multi-dimensional degradation data should be taken into account. Here, methods of residual life (RL) estimation are developed for a target product with multivariate PCs, under the assumption that the degradation of the PCs over time is governed by a multivariate Wiener process with nonlinear drifts. Both population-based degradation information and the degradation history of the target product to date are combined to estimate the RL of the product. Specifically, the population-based degradation information is first used to obtain estimates of the unknown model parameters through the EM algorithm; then the degradation history of the target product is used to update the degradation model, from which the RL is estimated. To illustrate the validity and usefulness of the proposed method, a numerical example on fatigue cracks is presented and analysed.
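In the one-dimensional linear-drift special case, the first passage time of a Wiener degradation process to a failure threshold is inverse Gaussian, which gives a closed-form residual-life distribution. The sketch below uses that special case; the paper's model is multivariate with nonlinear drifts and EM-estimated parameters:

```python
import numpy as np
from scipy.stats import invgauss

def residual_life_quantiles(x_t, D, mu, sigma, qs=(0.1, 0.5, 0.9)):
    """Residual life for a linear-drift Wiener degradation process
    X(s) = x_t + mu*s + sigma*W(s): the first passage time to threshold D
    is inverse Gaussian with mean (D - x_t)/mu and shape (D - x_t)^2/sigma^2."""
    a = D - x_t                        # remaining degradation margin
    mean_fp = a / mu                   # mean first-passage time
    lam = a**2 / sigma**2              # IG shape parameter
    # scipy parameterization: mean = mu' * scale, shape = scale
    return invgauss.ppf(qs, mu=mean_fp / lam, scale=lam)

# Hypothetical numbers: current degradation 2.0, threshold 5.0
print(residual_life_quantiles(x_t=2.0, D=5.0, mu=0.4, sigma=0.3))
```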
