Similar Documents (20 results retrieved)
1.
We call a sample design that allows different patterns, or sets, of data items to be collected from different sample units a Split Questionnaire Design (SQD). SQDs can be thought of as incorporating missing data into survey design. This paper examines the situation where data not collected by an SQD can be treated as Missing Completely At Random or Missing At Random, the targets are regression coefficients in a generalised linear model fitted to binary variables, and the targets are estimated by Maximum Likelihood. A key finding is that the relative contribution of a respondent to the accuracy of the estimated model parameters can be measured easily before all of the respondent's model covariates have been collected. We show empirically and theoretically that a significant reduction in respondent burden can be achieved, with a negligible impact on the accuracy of the estimates, by not collecting model covariates from respondents identified as contributing little to that accuracy. We discuss the general implications for SQDs.
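
A rough illustration of the general idea, not the authors' exact measure: with a logistic model fitted to the covariates already collected, each respondent can be given an approximate information score before the remaining items are collected, and low-scoring respondents can be skipped. The score, cutoff, and data below are hypothetical.

```python
import numpy as np

def fisher_contribution(x_partial, beta_partial):
    """Approximate a respondent's contribution to the Fisher information of a
    logistic model, using only the covariates collected so far.
    Score = w_i * x_i' x_i with w_i = p_i (1 - p_i); purely illustrative."""
    eta = x_partial @ beta_partial
    p = 1.0 / (1.0 + np.exp(-eta))
    w = p * (1.0 - p)
    return w * float(x_partial @ x_partial)

# Hypothetical screening rule: skip the remaining questionnaire modules for
# respondents whose score falls below a chosen cutoff.
rng = np.random.default_rng(0)
beta_partial = np.array([0.5, -1.0, 0.8])           # coefficients from an initial fit
X_partial = rng.normal(size=(1000, 3))              # covariates already collected
scores = np.array([fisher_contribution(x, beta_partial) for x in X_partial])
collect_full = scores > np.quantile(scores, 0.25)   # drop the least informative 25%
print(f"full collection for {collect_full.mean():.0%} of respondents")
```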

2.
In this paper, the gamma(λ, 2) distribution is considered as a failure model for the economic statistical design of x̄ control charts. The study shows that the statistical performance of control charts can be improved significantly, with only a slight increase in cost, by adding constraints to the optimization problem. Using an economic statistical design instead of a purely economic design yields control charts that may be less expensive to implement, have lower false alarm rates, and have a higher probability of detecting process shifts. Numerical examples are presented to support this proposition, and the results of the economic statistical design are compared with those of a pure economic design. The effects of adding constraints on statistical performance measures, such as the Type I error rate and the power of the chart, are investigated extensively.
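
A minimal sketch of what adding statistical constraints to an economic design search looks like for an x̄ chart: the chart parameters (sample size n, control-limit width k, sampling interval h) are chosen to minimize a stand-in hourly cost subject to a cap on the false alarm rate and a floor on the power against a shift of δ standard deviations. The cost function and all numbers are placeholders, not those of the cited study.

```python
import numpy as np
from scipy.stats import norm

def alpha(k):                        # false alarm probability per sample
    return 2 * norm.sf(k)

def power(n, k, delta=1.0):          # probability of detecting a shift of delta sigma
    return norm.sf(k - delta * np.sqrt(n)) + norm.cdf(-k - delta * np.sqrt(n))

def hourly_cost(n, k, h):            # placeholder cost: sampling + false alarms + missed shifts
    return 0.5 * n / h + 50 * alpha(k) / h + 100 * (1 - power(n, k))

best = None
for n in range(2, 26):                                    # grid search over chart parameters
    for k in np.arange(2.0, 3.6, 0.05):
        for h in np.arange(0.5, 4.1, 0.25):
            if alpha(k) > 0.01 or power(n, k) < 0.90:     # statistical constraints
                continue
            c = hourly_cost(n, k, h)
            if best is None or c < best[0]:
                best = (round(c, 3), n, round(k, 2), h)
print("cost, n, k, h =", best)
```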

3.
Xu Chao et al., 《统计研究》 2019, 36(6): 42-53
The financialization of China's manufacturing sector is a hotly debated issue, and curbing manufacturing firms' shift "from the real to the virtual" is key to preventing and defusing major financial risks and promoting high-quality economic development. Using the 2009 value-added tax (VAT) transformation reform as a quasi-natural experiment to capture exogenous changes in the tax burden on real activity, this paper empirically examines the causal relationship between that tax burden and manufacturing financialization based on data for A-share listed companies. We find that the reduction in the real tax burden triggered by the VAT reform significantly lowered the financialization level of manufacturing firms; the effect was larger for asset-heavy firms and firms facing weaker financing constraints, and relatively insignificant for asset-light firms and firms facing tighter financing constraints. Tests of the incentive mechanism show that the VAT reform raised manufacturing firms' returns on real assets and led them to increase fixed-asset investment and R&D spending. The findings provide a useful reference for government departments in designing and implementing tax policies aimed at de-financialization.

4.
Optimal designs for copula models
E. Perrone 《Statistics》2016,50(4):917-929
Copula modelling has become a standard tool in many areas of applied statistics over the past decade. A largely neglected aspect, however, concerns the design of related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing the experimental conditions and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows the formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Examples illustrate that considerable gains in design efficiency can be achieved in practical situations. A comparison between different copula models with respect to design efficiency is also provided.

5.
In many situations, information from a sample of individuals can be supplemented by population-level information on the relationship between a dependent variable and explanatory variables. Including the population-level information can reduce bias and increase the efficiency of the parameter estimates. Population-level information can be incorporated via constraints on functions of the model parameters. In general, the constraints are nonlinear, making maximum likelihood estimation harder. In this paper we develop an alternative approach that exploits the notion of an empirical likelihood. It is shown that, within the framework of generalised linear models, the population-level information corresponds to linear constraints, which are comparatively easy to handle. We provide a two-step algorithm that produces parameter estimates using only unconstrained estimation, together with computable expressions for the standard errors. We give an application to demographic hazard modelling, combining panel survey data with birth registration data to estimate annual birth probabilities by parity.
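
A minimal sketch of the general idea, using constrained maximum likelihood rather than the paper's empirical-likelihood two-step algorithm: a logistic model is fitted to sample data subject to the constraint that its implied average probability matches a known population proportion. All data and the population figure are simulated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * X[:, 1]))))
pop_mean = 0.55                       # known population-level proportion (external source)

def negloglik(beta):                  # negative log-likelihood of the logistic model
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

def constraint(beta):                 # model-implied mean must equal the population figure
    p = 1 / (1 + np.exp(-(X @ beta)))
    return p.mean() - pop_mean

unconstrained = minimize(negloglik, np.zeros(2), method="BFGS")
constrained = minimize(negloglik, unconstrained.x, method="SLSQP",
                       constraints={"type": "eq", "fun": constraint})
print("unconstrained:", unconstrained.x.round(3), "constrained:", constrained.x.round(3))
```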

6.
Stepped wedge trials are increasingly adopted because practical constraints necessitate staggered roll-out. While a complete design requires clusters to collect data in all periods, resource and patient-centered considerations may call for an incomplete stepped wedge design to minimize data collection burden. To study incomplete designs, we expand the metric of information content to discrete outcomes. We operate under a marginal model with general link and variance functions, and derive information content expressions when data elements (cells, sequences, periods) are omitted. We show that the centrosymmetric patterns of information content can hold for discrete outcomes with the variance-stabilizing link function. We perform numerical studies under the canonical link function, and find that while the patterns of information content for cells are approximately centrosymmetric for all examined underlying secular trends, the patterns of information content for sequences or periods are more sensitive to the secular trend, and may be far from centrosymmetric.

7.
The purpose of this article is to present a new policy for designing an acceptance sampling plan based on the minimum proportion of the lot that should be inspected in the presence of inspection errors. It is assumed that inspection is not perfect, so that not every defective item can be detected with certainty. A Bayesian method is used to obtain the probability distribution of the number of defective items in the lot. In designing the model, constraints on the producer's risk and the consumer's risk during inspection are imposed through two specified points on the operating characteristic curve. An example illustrates the application of the proposed model, and a sensitivity analysis examines its performance under different scenarios for the process parameters. Finally, the efficiency of the proposed model is compared, under the same conditions, with the sampling method of Spencer and Kevan de Lopez (2017).
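
A small sketch of how imperfect inspection changes an operating characteristic curve: if each defective item is flagged only with probability q, the observed number of defectives in a sample of n behaves as binomial with rate p·q, and the acceptance probability can be compared with the error-free case. The plan parameters and detection probability are illustrative, not those of the proposed model, and false positives on good items are ignored for brevity.

```python
from scipy.stats import binom

def prob_accept(p, n=80, c=2, detect=1.0):
    """P(accept lot) under a single-sample plan (n, c) when a defective item
    is flagged only with probability `detect` (false positives ignored)."""
    return binom.cdf(c, n, p * detect)

for p in [0.01, 0.02, 0.05, 0.10]:
    print(f"p={p:.2f}  perfect={prob_accept(p):.3f}  "
          f"imperfect={prob_accept(p, detect=0.85):.3f}")
```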

8.
A pragmatic approach to experimental design in industry
The importance of statistically designed experiments in industry has been well recognized. However, the use of 'design of experiments' is still not pervasive, owing in part to the inefficient learning process experienced by many non-statisticians. In this paper, the nature of design of experiments, in contrast to the usual statistical process control techniques, is discussed. It is then pointed out that for design of experiments to be appreciated and applied, appropriate approaches should be taken in training, learning and application. Perspectives based on the concepts of objective setting and design under constraints can be used to facilitate the experimenters' formulation of plans for collection, analysis and interpretation of empirical information. A review is made of the expanding role of design of experiments in the past several decades, with comparisons made of the various formats and contexts of experimental design applications, such as Taguchi methods and Six Sigma. The trend of development shows that, from the realm of scientific research to business improvement, the competitive advantage offered by design of experiments is being increasingly felt.

9.
It is well known that obtaining an accurate optimal design for a mixture experiment with complex constraints is difficult. In this article, we construct a random search algorithm that can be used to find the optimal design for a mixture model with complex constraints. First, we generate an initial set of candidate points by the Monte-Carlo method, and then run the random search algorithm to obtain the optimal set of points. We demonstrate the effectiveness of this method with two examples.
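
A minimal sketch of the two stages described, not the authors' exact algorithm: candidate mixture points satisfying extra linear constraints are generated by Monte-Carlo rejection sampling, and a simple random exchange then improves the D-criterion of a first-order Scheffé model. The constraint bounds and design size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def candidates(m=2000):
    """Monte-Carlo points on the simplex x1+x2+x3=1 with extra constraints."""
    pts = rng.dirichlet(np.ones(3), size=m)
    ok = (pts[:, 0] >= 0.2) & (pts[:, 2] <= 0.5)   # example complex constraints
    return pts[ok]

def log_det(design):                  # D-criterion for a first-order Scheffé model
    M = design.T @ design / len(design)
    sign, val = np.linalg.slogdet(M)
    return val if sign > 0 else -np.inf

cand = candidates()
design = cand[rng.choice(len(cand), size=12, replace=False)]
for _ in range(5000):                 # random exchange: swap one point if it helps
    i = rng.integers(12)
    trial = design.copy()
    trial[i] = cand[rng.integers(len(cand))]
    if log_det(trial) > log_det(design):
        design = trial
print("final log|M| =", round(log_det(design), 3))
```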

11.
Our paper proposes a methodological strategy for selecting optimal sampling designs for phenotyping studies that include a cocktail of drugs. The cocktail approach is of great interest for determining the simultaneous activity of the enzymes responsible for drug metabolism and pharmacokinetics, and is therefore useful for anticipating drug–drug interactions and for personalized medicine. Phenotyping indexes, which are areas under the concentration-time curve, can be derived from a few samples using nonlinear mixed-effect models and maximum a posteriori estimation. Because of clinical constraints in phenotyping studies, the number of samples that can be collected from each individual is limited and the sampling times must be as flexible as possible. Therefore, to optimize a joint design for several drugs (i.e., to find a compromise between the informative times that best characterize each drug's kinetics), we propose a compound optimality criterion based on the expected population Fisher information matrix in nonlinear mixed-effect models. This criterion allows different models to be weighted, which can reflect the importance accorded to each target in a phenotyping test. We also computed windows around the optimal times, based on recursive random sampling and Monte-Carlo simulation, while maintaining a reasonable level of efficiency for parameter estimation. We illustrate this strategy for two drugs often included in phenotyping cocktails, midazolam (a probe for CYP3A) and digoxin (P-glycoprotein), based on data from a previous study, and obtain a sparse and flexible design. The resulting design was evaluated by clinical trial simulations and shown to be efficient for the estimation of population and individual parameters.

12.
The robustness of group divisible (GD) designs when one block is lost is investigated in terms of the efficiency of the residual design. The efficiency can be evaluated exactly for singular GD and semi-regular GD designs, as well as for regular GD designs with λ1 = 0. In a regular GD design with λ1 > 0, the efficiency may depend on which block is lost, and sharp upper and lower bounds on the efficiency are presented. The investigation shows that GD designs are fairly robust in terms of efficiency. As a special case, we also show the robustness of a balanced incomplete block design when one block is lost.
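
A small numerical sketch of the kind of calculation involved, for the balanced incomplete block special case rather than a GD design: the efficiency of the residual design after deleting one block is compared with the full design via the information matrix C = R − N K⁻¹ Nᵀ. The design below is the standard (7, 7, 3, 3, 1) BIBD, and the efficiency measure used is the harmonic mean of the nonzero eigenvalues of C.

```python
import numpy as np

# Incidence matrix of the (7,7,3,3,1) BIBD (Fano plane): rows = treatments, cols = blocks.
blocks = [(0, 1, 3), (1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 0), (5, 6, 1), (6, 0, 2)]
N = np.zeros((7, 7))
for j, b in enumerate(blocks):
    for t in b:
        N[t, j] = 1

def c_matrix(N):
    r = N.sum(axis=1)                      # replications
    k = N.sum(axis=0)                      # block sizes
    return np.diag(r) - N @ np.diag(1.0 / k) @ N.T

def efficiency(C, tol=1e-8):
    ev = np.linalg.eigvalsh(C)
    ev = ev[ev > tol]                      # drop the zero eigenvalue (connected design)
    return len(ev) / np.sum(1.0 / ev)      # harmonic mean of nonzero eigenvalues

full = efficiency(c_matrix(N))
for j in range(7):                         # delete each block in turn
    resid = efficiency(c_matrix(np.delete(N, j, axis=1)))
    print(f"block {j} lost: relative A-efficiency = {resid / full:.3f}")
```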

13.
The generalized case-cohort design is widely used in large cohort studies to reduce cost and improve efficiency. Taking prior information about the parameters into account in the modelling process can further improve inference efficiency. In this paper, we consider fitting the proportional hazards model with constraints for generalized case-cohort studies. We establish a working likelihood function for the estimation of the model parameters. The asymptotic properties of the proposed estimator are derived via the Karush-Kuhn-Tucker conditions, and its finite-sample properties are assessed by simulation studies. A modified minorization-maximization algorithm is developed for the numerical calculation of the constrained estimator. An application to a Wilms tumor study demonstrates the utility of the proposed method in practice.

14.
Exchange algorithms for constructing large spatial designs
Exchange algorithms are widely used for optimizing design criteria and have proven to be highly effective. However, application of these algorithms can be computationally prohibitive for very large design problems, and in situations for which the design criterion is sufficiently complex so as to prevent efficient evaluation strategies. This paper reports on several modifications to exchange algorithms that lead to large reductions in the computational burden associated with optimizing any given design criterion. A small study indicates that these modifications do not significantly impact the quality of designs, while reductions in effort of over 90% can be achieved.
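
A compact sketch of a plain point-exchange algorithm for a spatial (maximin-distance) criterion over a large candidate grid; the speed-ups discussed in the paper are not reproduced here, only the basic exchange loop they modify, with a randomly sampled candidate subset standing in for a full pass.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)
grid = np.array([(x, y) for x in np.linspace(0, 1, 30) for y in np.linspace(0, 1, 30)])

def criterion(design):                       # maximin: smallest pairwise distance
    d = cdist(design, design)
    return d[np.triu_indices(len(design), 1)].min()

idx = rng.choice(len(grid), size=20, replace=False)
best = criterion(grid[idx])
improved = True
while improved:                              # exchange design points for better candidates
    improved = False
    for i in range(len(idx)):
        for j in rng.choice(len(grid), size=200, replace=False):
            trial = idx.copy()
            trial[i] = j
            val = criterion(grid[trial])
            if val > best:
                idx, best, improved = trial, val, True
print("maximin distance =", round(best, 4))
```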

15.
In nonlinear regression problems, the assumption is usually made that the parameter estimates will be approximately normally distributed. The accuracy of the approximation depends on the sample size and also on the intrinsic and parameter-effects curvatures. Based on these curvatures, criteria are defined here that indicate whether or not an experiment will lead to estimates whose distributions are well approximated by a normal distribution. An approach is motivated in which a primary design criterion is optimized subject to constraints based on these nonnormality measures. The approach can be used either to (I) find designs for a fixed sample size or to (II) choose the sample size for the design that is optimal for the primary objective so that the constraints are satisfied; the latter objective is useful because the nonnormality measures decrease with the sample size. As the constraints are typically not concave functions over a set of design measures, the usual equivalence theorems of optimal design theory do not hold for the first approach, and numerical implementation is required. Examples are given, and new notation using tensor products is introduced to give tractable general expressions for the nonnormality measures.

16.
This paper considers the problem of estimating a nonlinear statistical model subject to stochastic linear constraints among the unknown parameters. These constraints represent prior information originating from a previous estimation of the same model on an alternative database. One feature of this specification is that the design matrix of the stochastic linear restrictions may itself be estimated. The mixed regression technique and the maximum likelihood approach are used to derive estimators for both the model coefficients and the unknown elements of this design matrix. The proposed estimator, whose asymptotic properties are studied, contains as a special case the conventional mixed regression estimator based on a fixed design matrix. A new test of compatibility between prior and sample information is also introduced. The suggested estimator is tested empirically with both simulated and actual marketing data.
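
For the fixed-design-matrix special case mentioned at the end, the conventional mixed (Theil-Goldberger) estimator combines the sample information y = Xβ + ε with stochastic prior restrictions r = Rβ + v in a single generalized-least-squares step. The sketch below uses simulated data and treats the error variances as known.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Stochastic prior restriction r = R beta + v from an earlier study: beta1 + beta2 close to 1.4
R = np.array([[0.0, 1.0, 1.0]])
r = np.array([1.4])
omega = np.array([[0.05]])                     # variance of the restriction error v
sigma2 = 1.0                                   # sampling error variance (assumed known)

A = X.T @ X / sigma2 + R.T @ np.linalg.inv(omega) @ R
b = X.T @ y / sigma2 + R.T @ np.linalg.inv(omega) @ r
beta_mixed = np.linalg.solve(A, b)             # mixed (sample + prior) estimator
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("OLS:  ", beta_ols.round(3))
print("mixed:", beta_mixed.round(3))
```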

18.
Genichi Taguchi has emphasized the use of designed experiments in several novel and important applications. In this paper we focus on the use of statistical experimental designs in designing products to be robust to environmental conditions. The engineering concept of robust product design is very important because it is frequently impossible or prohibitively expensive to control or eliminate variation resulting from environmental conditions. Robust product design enables the experimenter to discover how to modify the design of the product to minimize the effect due to variation from environmental sources. In experiments of this kind, Taguchi's total experimental arrangement consists of a cross-product of two experimental designs: an inner array containing the design factors and an outer array containing the environmental factors. Except in situations where both these arrays are small, this arrangement may involve a prohibitively large amount of experimental work. One of the objectives of this paper is to show how this amount of work can be reduced. In this paper we investigate the applicability of split-plot designs for this particular experimental situation. Consideration of the efficiency of split-plot designs and an examination of several variants of split-plot designs indicate that experiments conducted in a split-plot mode can be of tremendous value in robust product design, since they not only enable the contrasts of interest to be estimated efficiently but can also be considerably easier to conduct than the designs proposed by Taguchi.
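
A small sketch of the experimental layouts being compared: a full cross of a 4-run inner array (design factors) with a 4-run outer array (environmental factors) needs 16 runs, and a split-plot arrangement of the same runs resets the environmental factors only at the whole-plot level. The factor names and array sizes are generic, not taken from the paper.

```python
from itertools import product

inner = [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # design factors A, B (2^2 inner array)
outer = [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # environmental factors P, Q (2^2 outer array)

# Taguchi cross array: every inner-array run is repeated at every outer-array setting.
cross = [(a, b, p, q) for (a, b), (p, q) in product(inner, outer)]
print("cross-array runs:", len(cross))           # 16

# Split-plot view of the same runs: each environmental setting defines a whole plot,
# and the design factors are reset within it, which is usually far easier to conduct.
for wp, (p, q) in enumerate(outer):
    subplots = [(a, b) for (a, b) in inner]
    print(f"whole plot {wp}: env=({p},{q}), subplot runs={subplots}")
```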

19.
Smoothing splines are known to exhibit a type of boundary bias that can reduce their estimation efficiency. In this paper, a boundary corrected cubic smoothing spline is developed in a way that produces a uniformly fourth order estimator. The resulting estimator can be calculated efficiently using an O(n) algorithm that is designed for the computation of fitted values and associated smoothing parameter selection criteria. A simulation study shows that use of the boundary corrected estimator can improve estimation efficiency in finite samples. Applications to the construction of asymptotically valid pointwise confidence intervals are also investigated.

20.
In a rank-order choice-based conjoint experiment, the respondent is asked to rank the alternatives in each of a number of choice sets. In this paper, we study the efficiency of such experiments and propose a D-optimality criterion for rank-order experiments to find designs yielding the most precise parameter estimators. For that purpose, an expression for the Fisher information matrix of the rank-ordered conditional logit model is derived, which clearly shows how much additional information is provided by each extra ranking step. A simulation study shows that, besides the Bayesian D-optimal ranking design, the Bayesian D-optimal choice design is also an appropriate design for this type of experiment. Finally, it is shown that considerable improvements in estimation and prediction accuracy are obtained by including extra ranking steps in an experiment.
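
A brief sketch of the "exploded logit" view behind the information calculation: a full ranking of a choice set is treated as a sequence of conditional-logit choices from shrinking sets, and the information from each extra ranking step is added up. This is a simplified calculation (the exact Fisher information averages over the random order in which alternatives are removed; here the highest-utility alternative is removed deterministically), and the design matrix and parameter values are made up.

```python
import numpy as np

def conditional_logit_info(X, beta):
    """Fisher information of one conditional-logit choice among the rows of X."""
    u = np.exp(X @ beta)
    p = u / u.sum()
    return X.T @ (np.diag(p) - np.outer(p, p)) @ X

def ranking_info(X, beta, steps):
    """Approximate information from the first `steps` ranking stages of one choice set:
    after each stage the (deterministically) most preferred remaining row is removed."""
    info = np.zeros((len(beta), len(beta)))
    rows = list(range(len(X)))
    for _ in range(steps):
        info += conditional_logit_info(X[rows], beta)
        rows.remove(rows[int(np.argmax(X[rows] @ beta))])   # drop the top alternative
    return info

beta = np.array([0.8, -0.4])
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])   # 4 alternatives, 2 attributes
for s in range(1, 4):
    val = np.linalg.slogdet(ranking_info(X, beta, s))[1]
    print(f"ranking steps = {s}: log det information = {val:.3f}")
```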
