Similar Documents
20 similar documents found (search time: 422 ms).
1.
We discuss in the present paper the analysis of heteroscedastic regression models and their applications to off-line quality control problems. It is well known that the method of pseudo-likelihood is usually preferred to full maximum likelihood since the resulting estimators of the regression function parameters are more robust to misspecification of the variance function. Despite its popularity, however, existing theoretical results are difficult to apply and are of limited use in many applications. Using more recent results in estimating equations, we obtain an efficient algorithm for computing the pseudo-likelihood estimator with desirable convergence properties and also derive simple, explicit and easy-to-apply asymptotic results. These results are used to look in detail at variance minimization in off-line quality control, yielding inference techniques for the optimized design parameter. In applications of some existing approaches to off-line quality control, such as the dual response methodology, rigorous statistical inference techniques are scarce and difficult to obtain. An example of off-line quality control is presented to discuss the practical aspects involved in the application of the results obtained and to address issues such as data transformation, model building and the optimization of design parameters. The analysis shows very encouraging results and uncovers important information not found in previous analyses.
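As a rough illustration of the pseudo-likelihood iteration this abstract refers to, the sketch below alternates between weighted least squares for the mean parameters and a moment-type update for a variance-function parameter. The linear mean and power-of-the-mean variance function are assumed for illustration only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated heteroscedastic data with Var(y | x) proportional to mu(x)**theta
# (a hypothetical power-of-the-mean variance function, chosen for illustration).
n = 200
x = rng.uniform(1, 10, n)
X = np.column_stack([np.ones(n), x])
mu_true = X @ np.array([2.0, 1.5])
y = mu_true + rng.normal(scale=0.5 * mu_true**0.8)

beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary least squares start
theta = 0.0
for _ in range(20):                              # pseudo-likelihood iterations
    mu = np.maximum(X @ beta, 1e-8)
    resid = y - X @ beta
    # Update the variance-function parameter by regressing log squared
    # residuals on log fitted means (a simple moment-type step).
    Z = np.column_stack([np.ones(n), np.log(mu)])
    theta = np.linalg.lstsq(Z, np.log(resid**2 + 1e-12), rcond=None)[0][1]
    # Update the mean parameters by weighted least squares with weights
    # inversely proportional to the estimated variances.
    sw = 1.0 / np.sqrt(mu**theta)
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print("beta estimate:", beta.round(3), " theta estimate:", round(theta, 3))
```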

2.
In this work we present a flexible class of linear models to treat observations made in discrete time and continuous space, where the regression coefficients vary smoothly in time and space. This kind of model is particularly appealing in situations where the effect of one or more explanatory processes on the response presents substantial heterogeneity in both dimensions. We describe how to perform inference for this class of models and also how to perform forecasting in time and interpolation in space, using simulation techniques. The performance of the algorithm to estimate the parameters of the model and to perform prediction in time is investigated with simulated data sets. The proposed methodology is used to model pollution levels in the Northeast of the United States.

3.
The evaluation of hazards from complex, large-scale, technologically advanced systems often requires the construction of computer-implemented mathematical models. These models are used to evaluate the safety of the systems and to evaluate the consequences of modifications to the systems. These evaluations, however, are normally surrounded by significant uncertainties: those inherent in natural phenomena such as the weather, and those in the parameters and models used in the evaluation.

Another use of these models is to evaluate strategies for improving information used in the modeling process itself. While sensitivity analysis is useful in defining variables in the model that are important, uncertainty analysis provides a tool for assessing the importance of uncertainty about these variables. A third, complementary technique is decision analysis. It provides a methodology for explicitly evaluating and ranking potential improvements to the model. Its use in the development of information-gathering strategies for a nuclear waste repository is discussed in this paper.

4.
Summary.  We consider the problem of obtaining population-based inference in the presence of missing data and outliers in the context of estimating the prevalence of obesity and body mass index measures from the 'Healthy for life' study. Identifying multiple outliers in a multivariate setting is problematic because of problems such as masking, in which groups of outliers inflate the covariance matrix in a fashion that prevents their identification when included, and swamping, in which outliers skew covariances in a fashion that makes non-outlying observations appear to be outliers. We develop a latent class model that assumes that each observation belongs to one of K unobserved latent classes, with each latent class having a distinct covariance matrix. We consider the latent class covariance matrix with the largest determinant to form an 'outlier class'. By separating the covariance matrix for the outliers from the covariance matrices for the remainder of the data, we avoid the problems of masking and swamping. As did Ghosh-Dastidar and Schafer, we use a multiple-imputation approach, which allows us simultaneously to conduct inference after removing cases that appear to be outliers and to propagate uncertainty in the outlier status through the model inference. We extend the work of Ghosh-Dastidar and Schafer by embedding the outlier class in a larger mixture model, consider penalized likelihood and posterior predictive distributions to assess model choice and model fit, and develop the model in a fashion to account for the complex sample design. We also consider the repeated sampling properties of the multiple imputation removal of outliers.
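The core 'outlier class' idea can be sketched with an off-the-shelf Gaussian mixture: fit K classes with class-specific covariance matrices and flag the class whose covariance determinant is largest. This is only a hypothetical, simplified stand-in for the authors' multiple-imputation model; the data and the choice K = 2 are made up.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Simulated bivariate data with a diffuse contaminating component.
clean = rng.multivariate_normal([25, 1.7], [[9, 0.5], [0.5, 0.01]], size=450)
noisy = rng.multivariate_normal([25, 1.7], [[400, 0], [0, 1.0]], size=50)
X = np.vstack([clean, noisy])

K = 2
gm = GaussianMixture(n_components=K, covariance_type="full",
                     random_state=0).fit(X)

# The component whose covariance matrix has the largest determinant plays
# the role of the 'outlier class'.
dets = [np.linalg.det(gm.covariances_[k]) for k in range(K)]
outlier_class = int(np.argmax(dets))
flagged = gm.predict(X) == outlier_class

print(f"outlier class = {outlier_class}, flagged {flagged.sum()} of {len(X)} points")
```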

5.
State-Owned Enterprise Reform and Changes in the Wage Payment Structure: A Panel Data Analysis
This paper examines the impact of state-owned enterprise (SOE) reform on changes in China's urban wage payment structure over the period 1988-2002, and in particular changes in the determinants of wages. We find that returns to education have been rising while returns to work experience have been falling. Data for 2002 suggest that the widening gender wage gap and the growing wage premiums associated with non-market factors may both have stopped rising. SOE reform and the declining share of heavy industry in the overall industrial structure have both affected the wages of workers in these sectors. Using retrospective panel data for 1998-2002, we provide fixed-effects estimates of the effects on wages of variables such as sectoral ownership structure, Communist Party membership and unemployment.
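A generic within (fixed-effects) estimator of the kind mentioned in the last sentence can be sketched as follows; the panel, variable names and coefficients are entirely hypothetical and are not the authors' data or specification.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Toy retrospective panel: log wages for n workers observed over 5 years.
# All coefficients and variables below are hypothetical.
n, T = 300, 5
worker = np.repeat(np.arange(n), T)
soe = rng.integers(0, 2, n * T)          # works in a state-owned unit this year
unemp = rng.integers(0, 2, n * T)        # experienced an unemployment spell
alpha = np.repeat(rng.normal(0.0, 0.5, n), T)   # unobserved worker effect
logwage = 1.0 + 0.10 * soe - 0.30 * unemp + alpha + rng.normal(0, 0.2, n * T)

df = pd.DataFrame({"worker": worker, "soe": soe, "unemp": unemp,
                   "logwage": logwage})

# Within (fixed-effects) estimator: demean every variable by worker so that
# the unobserved worker effect drops out; a purely time-invariant regressor
# (e.g. party membership measured once) would be absorbed and not estimable.
cols = ["logwage", "soe", "unemp"]
demeaned = df[cols] - df.groupby("worker")[cols].transform("mean")
beta = np.linalg.lstsq(demeaned[["soe", "unemp"]].to_numpy(),
                       demeaned["logwage"].to_numpy(), rcond=None)[0]
print(dict(zip(["soe", "unemp"], beta.round(3))))
```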

6.
This article uses Bayesian marginal likelihood analysis to compare univariate models of stock return behavior and test for structural breaks in the equity premium. The analysis favors a model that relates the equity premium to Markov-switching changes in the level of market volatility and accommodates volatility feedback. For this model, there is evidence of a one-time structural break in the equity premium in the 1940s, with no evidence of additional breaks in the postwar period. The break in the 1940s corresponds to a permanent reduction in the general level of stock market volatility. Meanwhile, there appears to be no change in the underlying risk preferences relating the equity premium to market volatility. The estimated unconditional equity premium drops from an annualized 12% before the break to 9% after the break.
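A much simpler relative of the model described here can be fitted with standard tools: a two-regime Markov-switching model with a switching intercept and switching variance for excess returns. The sketch below uses simulated data and omits the volatility-feedback and structural-break components of the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulate excess returns from a persistent two-state Markov chain with a
# low- and a high-volatility regime (all parameter values hypothetical).
n = 600
states = np.zeros(n, dtype=int)
for t in range(1, n):
    stay = 0.97 if states[t - 1] == 0 else 0.90
    states[t] = states[t - 1] if rng.random() < stay else 1 - states[t - 1]
sigma = np.where(states == 0, 0.02, 0.06)
mu = np.where(states == 0, 0.006, 0.002)     # premium differs by volatility state
returns = rng.normal(mu, sigma)

# Two-regime model with regime-specific intercept and regime-specific variance.
mod = sm.tsa.MarkovRegression(returns, k_regimes=2, trend="c",
                              switching_variance=True)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])   # regime probabilities
```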

7.
Statistics and statisticians have contributed to industry. Attention has been given to the appropriate training of statisticians for careers in industry. Statistics is used with benefit to industry in quality control, experimental design in research and development, and management decisions. Statistics can benefit from industry's assistance, the subject of this article. Five premises are set forth, three of them suggesting problems for statistics and industry. They relate to public understanding of statistics, the recruitment of students to statistics, the recruitment of statisticians to industry, and the relevance of research in statistics to the needs of industry. Thirteen recommendations are made on what industry can do for statistics and suggestions are made on how the American Statistical Association and the American Society for Quality Control can provide leadership in meeting the problems posed with the help of industry.

8.
Summary.  The system for monitoring suicides in Hong Kong has considerable delays in reporting as the cause of death needs to be determined by a coroner's investigation. However, timely estimates of suicide rates are desirable to assist in the formulation of public health policies. This motivated us to develop a non-parametric procedure to estimate the intensity function of a Poisson process in the presence of reporting delays. We give closed-form estimators of the Poisson intensity and the delay distribution, derive their asymptotic properties and conduct simulation studies to evaluate the method proposed. The method proposed is applied to estimate the intensity of suicide in Hong Kong.
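The flavour of a reporting-delay correction can be conveyed by a simple plug-in calculation (not the authors' closed-form estimators): estimate the delay distribution from events old enough to be fully reported, then divide recent counts by the estimated probability of having been reported by the cut-off date. All quantities below are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate events over 200 days with true intensity 10 per day and geometric
# reporting delays; only events reported by the cut-off day T-1 are observed.
T = 200
days = np.repeat(np.arange(T), rng.poisson(10.0, T))
delays = rng.geometric(p=0.2, size=days.size) - 1        # delay in days
observed = days[days + delays < T]

# Empirical delay distribution from events old enough to be (almost) fully
# reported by the cut-off.
train = (days < T - 60) & (days + delays < T)
F = np.cumsum(np.bincount(delays[train], minlength=60)) / train.sum()
F = np.clip(F, 1e-6, 1.0)

# Corrected intensity: reported count / P(reported by the cut-off).
reported = np.bincount(observed, minlength=T).astype(float)
max_delay_observable = np.minimum(T - 1 - np.arange(T), 59)
corrected = reported / F[max_delay_observable]

print("naive mean over last 10 days:    ", reported[-10:].mean().round(2))
print("corrected mean over last 10 days:", corrected[-10:].mean().round(2))
```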

9.
Summary.  The 2001 census in the UK asked for a return of people 'usually living at this address'. But this phrase is fuzzy and may have led to undercount. In addition, analysis of the sex ratios in the 2001 census of England and Wales points to a sex bias in the adjustments for net undercount—too few males in relation to females. The Office for National Statistics's abandonment of the method of demographic analysis for the population of working ages has allowed these biases to creep in. The paper presents a demographic account to check on the plausibility of census results. The need to revise preliminary estimates of the national population over a period of years following census day—as experienced in North America and now in the UK—calls into question the feasibility of a one-number census. Looking to the future, the environment for taking a reliable census by conventional methods is deteriorating. The UK Government's proposals for a population register open up the possibility of a Nordic-style administrative record census in the longer term.

10.
We consider the problem of estimating the minimum effective and peak doses in the presence of covariates. We propose a sequential strategy for subject assignment that includes an adaptive randomization component to balance the allocation to placebo and active doses with respect to covariates. We conclude that either adjusting for covariates in the model or balancing allocation with respect to covariates is required to avoid bias in the target dose estimation. We also compute optimal allocation to estimate the minimum effective and peak doses in discrete dose space using isotonic regression.
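Isotonic regression over a discrete dose space, as used for the target-dose estimates above, can be illustrated with scikit-learn; the toy data, the clinically relevant margin delta and the dose definitions below are assumptions for illustration, not the authors' design.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(5)

doses = np.array([0, 1, 2, 3, 4, 5])                  # 0 = placebo
true_mean = np.array([0.0, 0.1, 0.6, 1.1, 1.3, 1.3])  # response plateaus at dose 4
n_per_dose = 30
y = np.concatenate([rng.normal(m, 1.0, n_per_dose) for m in true_mean])
d = np.repeat(doses, n_per_dose)

# Monotone (non-decreasing) estimate of the dose-response curve.
iso = IsotonicRegression(increasing=True).fit(d, y)
mu_hat = iso.predict(doses)

# Minimum effective dose: smallest dose whose isotonic mean exceeds placebo
# by a clinically relevant margin delta (a hypothetical value here).
delta = 0.5
candidates = doses[1:][mu_hat[1:] >= mu_hat[0] + delta]
med = candidates.min() if candidates.size else None
# Peak dose: smallest dose at which the isotonic mean attains its maximum.
peak = doses[np.argmax(mu_hat)]
print("isotonic means:", mu_hat.round(2), " MED:", med, " peak dose:", peak)
```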

11.
Many new anticancer agents can be combined with existing drugs, as combining several drugs may be expected to have a better therapeutic effect than monotherapy owing to synergistic effects. Furthermore, to drive drug development and to reduce the associated cost, there has been a growing tendency to run these studies as combined phase I/II trials. In existing methodologies for phase I/II oncology trials that assess dose combinations, where efficacy based on tumor response and safety based on toxicity are modeled as binary outcomes, the next cohort of patients cannot be enrolled and treated until the best overall response has been determined in the current cohort. Thus, the trial duration may be extended to an unacceptable degree. In this study, we propose a method that, once the overall response in the current cohort has been determined, randomizes the next cohort of patients in the phase II part to dose combinations based on response rates estimated from all available observed data. We compared the proposed method with the existing method in simulation studies. These demonstrated that the percentage of optimal dose combinations selected by the proposed method is no less than that of the existing method and that the trial duration is shortened. The proposed method meets both ethical and financial requirements, and we believe it has the potential to help expedite drug development.
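The randomization step can be sketched with a hypothetical Beta-Binomial model: after each cohort, the response rate of every dose combination is re-estimated from all observed data and the next cohort is randomized with probabilities proportional to those estimates. This is a simplified stand-in, not the authors' exact model, and it ignores the toxicity component of the design.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical true response rates for four admissible dose combinations.
true_resp = {(1, 1): 0.20, (1, 2): 0.35, (2, 1): 0.40, (2, 2): 0.55}
combos = list(true_resp)
a = {c: 1.0 for c in combos}   # Beta(1, 1) prior on each response rate
b = {c: 1.0 for c in combos}

cohort_size, n_cohorts = 3, 12
for _ in range(n_cohorts):
    # Posterior mean response rate for each combination, using all data so far.
    est = np.array([a[c] / (a[c] + b[c]) for c in combos])
    probs = est / est.sum()                      # randomization probabilities
    assignment = rng.choice(len(combos), size=cohort_size, p=probs)
    for idx in assignment:
        c = combos[idx]
        response = rng.random() < true_resp[c]   # observe the efficacy outcome
        a[c] += response
        b[c] += 1 - response

best = combos[int(np.argmax([a[c] / (a[c] + b[c]) for c in combos]))]
print("selected dose combination:", best)
```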

12.
A Study of the Employment Effects of China's Informal Economy: An Input-Output Model Approach
刘波 《统计研究》2021,38(2):87-98
As China's economy shifts from high-speed growth to high-quality development, the informal economy plays an increasingly prominent role in promoting employment. By compiling a series of input-output tables for 2002-2017 that include informal-economy sectors, this paper uses an input-output model to quantify the direct and indirect effects on employment of informal-sector development in five industries: manufacturing; construction; wholesale and retail trade, accommodation and catering; transport, storage and postal services; and household and other services. The results show that: (1) in every industry, the informal sector's direct contribution to employment exceeds that of the formal sector in the same industry over the same period, and the employment effect of tertiary-industry informal sectors (dominated by wholesale and retail trade, accommodation and catering) is higher than that of secondary-industry informal sectors (dominated by manufacturing and construction); (2) because the informal sectors of manufacturing and construction have strong backward linkages, their indirect contribution to employment is higher than that of tertiary-industry informal sectors represented by wholesale and retail trade, accommodation and catering; (3) the indirect employment contribution of changes in informal-sector output is concentrated in agriculture, forestry, animal husbandry and fishery, manufacturing, wholesale and retail trade, accommodation and catering, and leasing and business services, while its performance in construction, transport, storage and postal services, and household and other services is less encouraging; (4) over the sample period, both the direct and the indirect employment contributions of informal-sector output changes in the five industries show a declining trend.
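The direct and indirect effects follow the standard Leontief logic: with technical-coefficient matrix A and employment coefficients e (jobs per unit of output), the total employment requirement per unit of final demand is e(I - A)^(-1); the direct effect is e itself and the indirect effect is the remainder. A toy three-sector sketch with made-up numbers, not the paper's compiled tables:

```python
import numpy as np

# Hypothetical 3-sector technical coefficient matrix A (A[i, j] = input from
# sector i needed per unit of output of sector j) and employment coefficients.
A = np.array([[0.15, 0.25, 0.05],
              [0.20, 0.30, 0.10],
              [0.10, 0.15, 0.20]])
emp_coef = np.array([0.8, 0.5, 1.2])            # jobs per unit of output

leontief_inv = np.linalg.inv(np.eye(3) - A)     # (I - A)^(-1)
total_mult = emp_coef @ leontief_inv            # total employment multipliers
direct = emp_coef                               # direct employment effect
indirect = total_mult - direct                  # indirect employment effect

for j, name in enumerate(["sector 1", "sector 2", "sector 3"]):
    print(f"{name}: direct={direct[j]:.2f}, indirect={indirect[j]:.2f}, "
          f"total={total_mult[j]:.2f}")
```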

13.
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active control effect has been retained by the new drug, using methods such as those described by Rothmann et al. (Statistics in Medicine 2003; 22:239-264) and Wang and Hung (Controlled Clinical Trials 2003; 24:147-155). However, this approach can lead to prohibitively large clinical trials and results in a tendency to dichotomize trial outcome as either 'success' or 'failure' and thus oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first design step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second analysis step, the probability that the new drug is superior to placebo is assessed and, if sufficiently high, in the third and final step the relative efficacy of the new drug to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.
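The 'fraction of the control effect retained' can be illustrated with two log hazard ratios, the new-drug-versus-control estimate from the current trial and a historical control-versus-placebo estimate, combined with a delta-method interval. The numbers below are hypothetical and this is only a rough sketch of the retention idea, not the cited authors' test procedures.

```python
import numpy as np
from scipy.stats import norm

# Historical meta-analysis: log hazard ratio of placebo relative to control,
# so a positive value means the control works (hypothetical numbers).
theta_cp, se_cp = 0.35, 0.08
# Current active-control trial: log hazard ratio of new drug relative to
# control (hypothetical numbers).
theta_nc, se_nc = 0.05, 0.10

# Estimated fraction of the control effect retained by the new drug.
retention = 1.0 - theta_nc / theta_cp

# Delta-method standard error, treating the two estimates as independent.
se_ret = np.sqrt(se_nc**2 / theta_cp**2 +
                 (theta_nc**2 * se_cp**2) / theta_cp**4)
z = norm.ppf(0.975)
ci = (retention - z * se_ret, retention + z * se_ret)
print(f"estimated retention: {retention:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```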

14.
Summary.  A common application of multilevel models is to apportion the variance in the response according to the different levels of the data. Whereas partitioning variances is straightforward in models with a continuous response variable with a normal error distribution at each level, the extension of this partitioning to models with binary responses or to proportions or counts is less obvious. We describe methodology due to Goldstein and co-workers for apportioning variance that is attributable to higher levels in multilevel binomial logistic models. They referred to this partitioning as the variance partition coefficient. We consider extending the variance partition coefficient concept to data sets where the response is a proportion and where the binomial assumption may not be appropriate owing to overdispersion in the response variable. Using the literacy data from the 1991 Indian census we estimate simple and complex variance partition coefficients at multiple levels of geography in models with significant overdispersion and thereby establish the relative importance of different geographic levels that influence educational disparities in India.
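For a two-level binomial logistic model with level 2 variance sigma_u^2, the latent-variable version of the variance partition coefficient described by Goldstein and co-workers is sigma_u^2 / (sigma_u^2 + pi^2/3), where pi^2/3 is the level 1 variance of the standard logistic distribution. A minimal illustration:

```python
import math

def vpc_latent(sigma2_u: float) -> float:
    """Variance partition coefficient for a two-level binomial logistic model
    (latent-variable method): share of variance attributable to level 2."""
    return sigma2_u / (sigma2_u + math.pi**2 / 3)

# e.g. a district-level variance of 0.6 on the logit scale
print(round(vpc_latent(0.6), 3))   # about 0.154
```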

15.
The data collection process in most observational and experimental studies yields different types of variables, leading to the use of joint models that are capable of handling multiple data types. Evaluation of various statistical techniques that have been developed for mixed data in simulated environments requires concurrent generation of multiple variables. In this article, I present an important augmentation to a unified framework proposed in our previously published work for simultaneously generating binary and nonnormal continuous data given the marginal characteristics and correlation structure, via fifth-order power polynomials that are known to extend the area covered in the skewness-elongation plane and to provide a better approximation to the probability density function of the continuous variables. I evaluate how well the improved methodology performs in comparison to the original one, in a simulated setting with illustrations of algorithmic steps. Although the relative gains for the associational quantities are not substantial, the augmented version appears to better capture the marginal quantities that are pertinent to the higher-order moments, as indicated by very close resemblance between the specified and empirically computed quantities on average.
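The generation step rests on the fifth-order power polynomial Y = c0 + c1*Z + c2*Z^2 + ... + c5*Z^5 applied to a standard normal Z, with the constants solved to match target moments. The sketch below shows only the transformation step; the coefficients are placeholders, not solutions of the moment equations that the article relies on.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def fifth_order_poly(z: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Apply the fifth-order power polynomial Y = sum_k c[k] * Z**k."""
    return sum(ck * z**k for k, ck in enumerate(c))

rng = np.random.default_rng(7)
z = rng.standard_normal(100_000)

# Placeholder coefficients (c0..c5); in practice they are obtained by solving
# the system that matches the first six moments of the target distribution.
c = np.array([0.0, 0.95, 0.10, 0.02, -0.01, 0.001])
y = fifth_order_poly(z, c)

print("mean", y.mean().round(3), "var", y.var().round(3),
      "skew", skew(y).round(3), "excess kurtosis", kurtosis(y).round(3))
```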

16.
To quantify uncertainty in a formal manner, statisticians play a vital role in identifying a prior distribution for a Bayesian-designed clinical trial. However, when expert beliefs are to be used to form the prior, the literature is sparse on how feasible and how reliable it is to elicit beliefs from experts. For late-stage clinical trials, high importance is placed on reliability; however, feasibility may be equally important in early-stage trials. This article describes a case study to assess how feasible it is to conduct an elicitation session in a structured manner and to form a probability distribution that would be used in a hypothetical early-stage trial. The case study revealed that by using a structured approach to planning, training and conduct, it is feasible to elicit expert beliefs and form a probability distribution in a timely manner. We argue that by further increasing the published accounts of elicitation of expert beliefs in drug development, there will be increased confidence in the feasibility of conducting elicitation sessions. Furthermore, this will lead to wider dissemination of the pertinent issues on how to quantify uncertainty to both practicing statisticians and others involved with designing trials in a Bayesian manner. Copyright © 2013 John Wiley & Sons, Ltd.

17.
万海远等 《统计研究》2020,37(4):87-100
Against the background of accelerating population ageing and a shrinking working-age population, developing China's older labour force is of great importance. Based on household survey data for 1988-2013 and a long-run cross-country comparison with Russia, this paper finds that the employment rate of older people in China is low and has been declining, and it then uses the Oaxaca decomposition to identify the sources of this decline, thereby explaining why the employment rate of older urban residents in China is low. The study finds that cultural and institutional factors have no significant effect; instead, labour market endowments and their divergence are the main reasons why the employment rate of older people in China is low and continues to fall. Compared with Russia, urban residents in China have accumulated wealth faster and have more diversified income structures; combined with the crowding-out effect of the inflow of young migrant workers, this has reduced both the willingness of older people to work and their competitiveness in the labour market, so that their employment rate has gradually declined. Policy should therefore break down labour market segmentation so that older workers can change occupations and jobs smoothly, improve the business environment for small and micro enterprises so as to expand informal employment opportunities for older people, and increase vocational training for older workers to raise their competitiveness, encouraging better-educated and more skilled older people to work longer and become an important supplement to the labour force.
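A generic two-fold Oaxaca-Blinder decomposition of a mean gap into an endowment part and a coefficient part looks as follows; the simulated groups, variables and coefficients are hypothetical and not the authors' data or specification.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(n, beta, means):
    # Columns: intercept, a wealth-type variable, a non-wage-income variable.
    X = np.column_stack([np.ones(n),
                         rng.normal(means[0], 1.0, n),
                         rng.normal(means[1], 1.0, n)])
    y = X @ beta + rng.normal(0, 0.3, n)    # employment propensity (toy scale)
    return X, y

# Group A and group B with different endowments and different coefficients
# (all numbers hypothetical).
Xa, ya = simulate(2000, beta=np.array([0.70, -0.08, -0.05]), means=(1.5, 1.2))
Xb, yb = simulate(2000, beta=np.array([0.75, -0.03, -0.02]), means=(0.8, 0.6))

ba = np.linalg.lstsq(Xa, ya, rcond=None)[0]
bb = np.linalg.lstsq(Xb, yb, rcond=None)[0]
gap = ya.mean() - yb.mean()

# Two-fold decomposition using group B's coefficients as the reference.
endowment = (Xa.mean(axis=0) - Xb.mean(axis=0)) @ bb     # characteristics part
coefficient = Xa.mean(axis=0) @ (ba - bb)                # coefficients part
print(f"gap={gap:.3f}  endowments={endowment:.3f}  coefficients={coefficient:.3f}")
```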

18.
范超  王雪琪 《统计研究》2016,33(8):95-100
The house price-to-income ratio is an important indicator of households' ability to afford housing. To reflect more accurately the long-run housing burden borne by Chinese households, this paper builds on the permanent income hypothesis and, using data for 35 large and medium-sized Chinese cities, estimates a state space model to obtain a house price-to-permanent-income ratio, determines its reasonable upper bound through scenario analysis and examines its main features. The results show that: (1) the reasonable upper bound of the price-to-permanent-income ratio is 7.6, whereas the average across the 35 cities over 2002-2013 was 9.2, with 28 cities already above the bound and Beijing the highest at 14.9; (2) the more developed the city, the higher the price-to-permanent-income ratio and the heavier the housing burden on residents, and over time the gap between first-tier and second- and third-tier cities has been widening; (3) geographically, the price-to-permanent-income ratios of large and medium-sized cities rank from high to low across the eastern, central, northeastern and western regions; (4) the price-to-permanent-income ratio differs by about 10% from the conventional ratio computed from disposable income. The government should take effective measures to continue restraining house prices and ease the housing burden on residents.
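One hedged way to extract a 'permanent income' series in this spirit is a local-level (random-walk-plus-noise) state space model, with the smoothed state standing in for permanent income. The sketch below uses simulated data for a single city; the paper's model for 35 cities is richer.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)

# Simulated annual disposable income for one city: a slowly drifting
# permanent component plus transitory noise (all numbers hypothetical).
T = 12                                           # e.g. 2002-2013
permanent = 20 + np.cumsum(rng.normal(1.5, 0.3, T))
income = permanent + rng.normal(0, 1.2, T)
price = 7.0 * permanent + rng.normal(0, 5.0, T)  # house price per household

# Local level model: income_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
mod = sm.tsa.UnobservedComponents(income, level="local level")
res = mod.fit(disp=False)
permanent_hat = res.smoothed_state[0]            # smoothed level = permanent income

ratio_disposable = price / income                # conventional ratio
ratio_permanent = price / permanent_hat          # price-to-permanent-income ratio
print("mean ratio (disposable income):", ratio_disposable.mean().round(2))
print("mean ratio (permanent income): ", ratio_permanent.mean().round(2))
```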

19.
Summary.  Survey organizations often attempt to 'convert' sample members who refuse to take part in a survey. Persuasive techniques are used in an effort to change the refusers' minds and persuade them to agree to an interview. This is done to improve the response rate and, possibly, to reduce non-response bias. However, refusal conversion attempts are expensive and must be justified. Previous studies of the effects of refusal conversion attempts are few and have been restricted to cross-sectional surveys. The criteria for 'success' of a refusal conversion attempt are different in a longitudinal survey, where for many purposes the researcher requires complete data over multiple waves. The paper uses data from the British Household Panel Survey from 1994 to 2003 to assess the long-term effectiveness of refusal conversion procedures in terms of sample sizes, sample composition and data quality.

20.
The problem of interpreting lung-function measurements in industrial workers is examined. Two common lung-function measurements (FEV1 and FVC) are described. The standard method currently used in the analysis of such cross-sectional survey data is discussed. The basic assumption of a linear decline with age is questioned on the basis of large sets of data from a variety of industries in British Columbia. It is shown that, while the linear assumption holds approximately in unexposed, healthy nonsmoking individuals, a quadratic age effect is often observed in smokers and/or in individuals who are industrially exposed to certain fumes or dusts. Recognizing this accelerated rate of deterioration in the lungs is of fundamental importance both to the identification of affected individuals and to the understanding of the process involved. An attempt is made to interpret the variety of nonlinear situations observed, by appealing to population selection mechanisms, individual variations in susceptibility, and the effects due to various levels of stimulus strength.
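The linear-versus-quadratic comparison can be illustrated with an ordinary polynomial regression on simulated data in which an accelerating decline is assumed for the exposed group; the coefficients are invented and the data are not the British Columbia surveys.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)

n = 500
age = rng.uniform(20, 65, n)
exposed = rng.integers(0, 2, n)
# Assumed data-generating process: linear decline in FEV1 for the unexposed,
# an additional quadratic (accelerating) decline for the exposed group.
a = age - 20
fev1 = 4.6 - 0.025 * a - exposed * 0.0004 * a**2 + rng.normal(0, 0.35, n)

def fit(X):
    return sm.OLS(fev1, sm.add_constant(X)).fit()

linear = fit(np.column_stack([a, exposed]))
quadratic = fit(np.column_stack([a, exposed, exposed * a**2]))
print("AIC, linear decline:   ", round(linear.aic, 1))
print("AIC, quadratic decline:", round(quadratic.aic, 1))
print("exposed x age^2 coefficient:", quadratic.params[-1].round(5))
```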
