Similar Articles
20 similar articles found (search time: 171 ms)
1.
Although the ecologic effects of acid rain have been widely reported, relatively little is known about the effects of acidic air pollution on human health. Some epidemiologic and animal studies suggest, however, that acidity is an important determinant of the respiratory health effects of aerosols. This paper reviews some of that evidence and discusses its implications for the design and analysis of epidemiologic studies. We contrast two types of exposure patterns: peak exposures associated with air pollution episodes, and chronic exposures resulting from persistently high levels of air pollutants. Recent work on the analysis of repeated categorical outcome variables provides new methods for the analysis of episode studies. Studies of long-term exposure require comparisons among population groups, and these comparisons can be subject to the design effects characteristic of multistage sample surveys. We examine the implications of these design effects for epidemiologic studies. Finally, the paper discusses the measurement errors induced by the use of outdoor measurements to quantify personal exposure to air pollutants. Recent work on methods for errors-in-variables problems may aid in assessing the effects of such errors on conventional analyses of air-pollution studies.

2.
Using mobile phones to conduct survey interviews has gathered momentum recently. However, using mobile telephones in surveys poses many new challenges. One important challenge involves properly classifying final case dispositions to understand response rates and non-response error and to implement responsive survey designs. Both purposes demand accurate assessments of the outcomes of individual call attempts. By looking at actual practices across three countries, we suggest how the disposition codes of the American Association for Public Opinion Research, which have been developed for telephone surveys, can be modified to fit mobile phones. Adding an international dimension to these standard definitions will improve survey methods by making systematic comparisons across different contexts possible.

3.
章国华 《统计研究》2008,25(11):96-99
With the spread of mobile phones, the mobile-phone survey is set to become another new survey method after the online survey, and one of practical significance. This paper discusses the concept and classification of mobile-phone surveys, highlights their advantages, proposes ideas for their application, examines their main problems together with countermeasures, and concludes with several suggestions for developing mobile-phone surveys.

4.
The maximum likelihood estimator (MLE) in nonlinear panel data models with fixed effects is widely understood (with a few exceptions) to be biased and inconsistent when T, the length of the panel, is small and fixed. However, there is surprisingly little theoretical or empirical evidence on the behavior of the estimator on which to base this conclusion. The received studies have focused almost exclusively on coefficient estimation in two binary choice models, the probit and logit models. In this note, we use Monte Carlo methods to examine the behavior of the MLE of the fixed effects tobit model. We find that the estimator's behavior is quite unlike that of the estimators of the binary choice models. Among our findings are that the location coefficients in the tobit model, unlike those in the probit and logit models, are unaffected by the “incidental parameters problem.” But a surprising result emerges instead: the finite sample bias appears in the disturbance variance rather than in the slopes. This has implications for estimation of marginal effects and asymptotic standard errors, which are also examined in this paper. The effects are also examined for the probit and truncated regression models, extending the range of received results in the first of these beyond the widely cited biases in the coefficient estimators.
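The finding that incidental-parameters bias lands in the variance rather than the slopes echoes the classic Neyman-Scott result for the linear fixed-effects model. A minimal Python sketch of that simpler case (illustrative only; the paper's Monte Carlo concerns the tobit MLE): with T = 2, the profiled MLE of the error variance converges to (T-1)/T times the true value.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, sigma = 5000, 2, 1.0
alpha = rng.normal(0, 1, N)                     # one fixed effect per unit
y = alpha[:, None] + rng.normal(0, sigma, (N, T))

# The MLE concentrates out each alpha_i as the unit mean; the profiled
# variance estimator then divides by N*T instead of N*(T-1):
resid = y - y.mean(axis=1, keepdims=True)
sigma2_mle = (resid ** 2).sum() / (N * T)       # biased: E = sigma^2 * (T-1)/T
sigma2_fix = (resid ** 2).sum() / (N * (T - 1)) # degrees-of-freedom correction
```

With T = 2 the uncorrected MLE estimates roughly half the true variance, however large N grows.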

5.
New data collection and storage technologies have given rise to a new field of streaming data analytics: real-time statistical methodology for online data analysis. Most existing online learning methods are based on homogeneity assumptions, which require the samples in a sequence to be independent and identically distributed. However, inter-batch correlation and dynamically evolving batch-specific effects are among the key defining features of real-world streaming data such as electronic health records and mobile health data. This article builds on a state-space mixed model framework in which the observed data stream is driven by a latent state process that follows a Markov process. In this setting, online maximum likelihood estimation is made challenging by high-dimensional integrals and complex covariance structures. We develop a real-time Kalman-filter-based regression analysis method that updates both point estimates and their standard errors for fixed population average effects while adjusting for dynamic hidden effects. Both theoretical justification and numerical experiments demonstrate that the proposed online method has statistical properties similar to those of its offline counterpart and enjoys great computational efficiency. We also apply the method to analyze an electronic health record dataset.
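A minimal sketch of Kalman-filter-based regression with a hidden dynamic effect (a simplified scalar version assumed from the abstract, not the authors' implementation): the static coefficient beta and an AR(1) hidden effect b_t are stacked into one state vector, so each arriving observation updates both the point estimate of beta and its standard error in a single pass over the stream.

```python
import numpy as np

rng = np.random.default_rng(2)
T, beta_true, rho, q, r = 500, 1.5, 0.8, 0.1, 0.5
x = rng.normal(size=T)
b = np.zeros(T)
for t in range(1, T):                       # latent AR(1) batch effect
    b[t] = rho * b[t - 1] + rng.normal(0, np.sqrt(q))
y = beta_true * x + b + rng.normal(0, np.sqrt(r), T)

# State s_t = (beta, b_t): beta is static (zero state noise), b_t is AR(1)
F = np.array([[1.0, 0.0], [0.0, rho]])
Q = np.array([[0.0, 0.0], [0.0, q]])
s, P = np.zeros(2), np.eye(2) * 10.0        # diffuse prior
for t in range(T):
    s = F @ s                                # predict
    P = F @ P @ F.T + Q
    H = np.array([x[t], 1.0])                # y_t = beta*x_t + b_t + noise
    S = H @ P @ H + r
    K = P @ H / S
    s = s + K * (y[t] - H @ s)               # update point estimate
    P = P - np.outer(K, H @ P)               # update covariance

beta_hat, se_beta = s[0], np.sqrt(P[0, 0])   # running estimate and std. error
```

Each iteration costs O(1), so the estimate is refreshed in real time as data batches arrive.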

6.
Linear mixed models based on the normality assumption are widely used in health related studies. Although the normality assumption leads to simple, mathematically tractable, and powerful tests, violation of the assumption may easily invalidate the statistical inference. Transformation of variables is sometimes used to make normality approximately true. In this paper we consider another approach by replacing the normal distributions in linear mixed models by skew-t distributions, which account for skewness and heavy tails for both the random effects and the errors. The full likelihood-based estimator is often difficult to use, but a 3-step estimation procedure is proposed, followed by an application to the analysis of deglutition apnea duration in normal swallows. The example shows that skew-t models often entail more reliable inference than Gaussian models for the skewed data.

7.
The effect of social mobility on the socioeconomic differential in mortality is examined with data from the Office for National Statistics Longitudinal Study. The analyses involve 46 980 men aged 45–64 years in 1981. The mortality risk of the socially mobile is compared with the mortality risk of the socially stable after adjustment for their class of origin (their social class in 1971) and class of destination (their social class in 1981) separately. Among those in employment there is some evidence that movement out of their class of origin is in the direction predicted by the idea of health-related social mobility. This evidence, however, seems strongest for causes of death which are least likely to have been preceded by prolonged incapacity. Movement into the class of destination, however, shows the opposite relationship with mortality. Compared with the socially stable members of their class of destination, the upwardly mobile tend to have higher mortality and the downwardly mobile tend to have lower mortality. This relationship with the class of destination, it is suggested, may explain why socioeconomic mortality differentials do not widen with increasing age.

8.
Success of the recently implemented Affordable Care Act hinges on previously uninsured young adults enrolling in coverage. How will increased coverage, in turn, affect health care utilization? This paper applies variable coefficient panel models to estimate the impact of insurance on health care utilization among young adults. The econometric setup, which accommodates nonlinear usage measures, attempts to address the potential endogeneity of insurance status. The main finding is that, for approximately one-fifth of young adults, insurance does not substantially alter health care consumption. On the other hand, another one-fifth of young adults have large moral hazard effects. Among that group, insurance increases the probability of having a routine checkup by 71–120%, relative to mean probabilities, and insurance increases the number of curative-based doctor office visits by 67–181%, relative to the mean number of visits.

9.
We provide a detailed statistical investigation into the economic and demographic factors that determine sporting participation in England. Using data from the 1997 Health Survey for England, we fit random-effects probit models that take into account unobservable household preferences for sporting activities, as well as the economic and demographic characteristics of respondents. Our main results from the multivariate analysis are that sporting participation is positively related to household income; that the educated participate in sports to a greater extent than the uneducated; that there is no evidence of regional differentials in sporting participation; and that household preferences play an important role in the decision to participate in sports.

10.
Crossover designs, or repeated measurements designs, are used for experiments in which t treatments are applied to each of n experimental units successively over p time periods. Such experiments are widely used in areas such as clinical trials, experimental psychology and agricultural field trials. In addition to the direct effect on the response of the treatment in the period of application, there is also the possible presence of a residual, or carry-over, effect of a treatment from one or more previous periods. We use a model in which the residual effect from a treatment depends upon the treatment applied in the succeeding period; that is, a model which includes interactions between the treatment direct and residual effects. We assume that residual effects do not persist further than one succeeding period. A particular class of strongly balanced repeated measurements designs with n = t² units and which are uniform on the periods is examined. A lower bound for the A-efficiency of the designs for estimating the direct effects is derived and it is shown that such designs are highly efficient for any number of periods p = 2,…,2t.
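For p = 2 periods, the class described can be illustrated by taking the n = t² units to be all ordered treatment pairs; a minimal Python check (a sketch inferred from the abstract's description, not the paper's general construction for arbitrary p):

```python
import numpy as np
from itertools import product

t = 3
# n = t^2 units, p = 2 periods: one unit per ordered treatment pair
design = np.array(list(product(range(t), repeat=2)))

# Uniform on periods: each treatment appears n/t = t times in each period
for period in range(2):
    assert (np.bincount(design[:, period], minlength=t) == t).all()

# Strongly balanced: every ordered pair (including a treatment followed by
# itself) occurs exactly once, so every treatment is preceded equally often
# by every treatment, balancing the carry-over effects
pairs, counts = np.unique(design, axis=0, return_counts=True)
assert len(pairs) == t * t and (counts == 1).all()
```

Including the self-following sequences (i, i) is what distinguishes strong balance from ordinary balance, and it is what allows direct and residual effects to be estimated orthogonally.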

11.
As the treatments of cancer progress, a certain number of cancers are curable if diagnosed early. In population‐based cancer survival studies, cure is said to occur when mortality rate of the cancer patients returns to the same level as that expected for the general cancer‐free population. The estimates of cure fraction are of interest to both cancer patients and health policy makers. Mixture cure models have been widely used because the model is easy to interpret by separating the patients into two distinct groups. Usually parametric models are assumed for the latent distribution for the uncured patients. The estimation of cure fraction from the mixture cure model may be sensitive to misspecification of latent distribution. We propose a Bayesian approach to mixture cure model for population‐based cancer survival data, which can be extended to county‐level cancer survival data. Instead of modeling the latent distribution by a fixed parametric distribution, we use a finite mixture of the union of the lognormal, loglogistic, and Weibull distributions. The parameters are estimated using the Markov chain Monte Carlo method. Simulation study shows that the Bayesian method using a finite mixture latent distribution provides robust inference of parameter estimates. The proposed Bayesian method is applied to relative survival data for colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate the cure fractions. The Canadian Journal of Statistics 40: 40–54; 2012 © 2012 Statistical Society of Canada
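A minimal sketch of a parametric mixture cure model fitted by maximum likelihood (a single Weibull latent distribution on simulated data, not the paper's Bayesian finite-mixture approach): the population survival is S(t) = π + (1 − π)S_u(t), so censored observations contribute that mixture while observed events contribute the uncured density.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
pi_true, k_true, lam_true = 0.3, 1.5, 2.0        # cure fraction, Weibull shape/scale
cured = rng.random(n) < pi_true
t_event = lam_true * rng.weibull(k_true, n)      # latent event time (uncured)
t_cens = rng.uniform(0, 10, n)                   # administrative censoring
obs = np.where(cured, t_cens, np.minimum(t_event, t_cens))
event = (~cured) & (t_event <= t_cens)

def negloglik(theta):
    pi = 1.0 / (1.0 + np.exp(-theta[0]))         # logit-transformed cure fraction
    k, lam = np.exp(theta[1]), np.exp(theta[2])  # log-transformed Weibull params
    Su = np.exp(-(obs / lam) ** k)               # latent (uncured) survival
    ll = np.empty(n)
    # events: density of the uncured component, weighted by 1 - pi
    ll[event] = (np.log(1 - pi) + np.log(k / lam)
                 + (k - 1) * np.log(obs[event] / lam) - (obs[event] / lam) ** k)
    # censored: mixture survival pi + (1 - pi) * Su
    ll[~event] = np.log(pi + (1 - pi) * Su[~event])
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 2000})
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))         # estimated cure fraction
```

The abstract's sensitivity point shows up here directly: π̂ is read off the plateau of the survival curve, so a wrong latent family distorts the plateau and hence the cure fraction.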

12.
Summary. The cumulative number of human immunodeficiency virus (HIV) infections worldwide has reached 60 million in little over 30 years. HIV continues to spread despite a detailed understanding of the manner in which it spreads and measures which can prevent spread. Some governments have been highly successful in containing the spread of HIV through blood products and from mother to child and among injecting drug users. Lack of political will, lack of resources or challenges to widely accepted scientific evidence have held back similar interventions in other countries. It has proved much more difficult to reduce the sexual transmission of HIV in both high and low income countries. A wide range of strategies has been identified but it remains unclear which strategies deserve priority and what methods of promoting them have the greatest effect. There is ample evidence that awareness of HIV and changes in sexual behaviour have occurred widely but the penetration of information remains poor in some vulnerable groups especially adolescents and women in poorer countries. Further obstacles face those who have information about the risk. The subordinate position of women and a desire for large families are important obstacles to condom negotiation and use. Urbanization, poverty, conflict and declining public services all exacerbate unsafe sexual behaviour. We argue that so-called 'structural' interventions directed at these wider contexts of unsafe behaviour merit greater attention. Such approaches have the added benefit of being less susceptible to 'risk compensation' which has the potential to undermine strategies directed at reducing the transmission efficiency of HIV.  相似文献   

13.
Nonlinear mixed‐effects models are being widely used for the analysis of longitudinal data, especially from pharmaceutical research. They use random effects which are latent and unobservable variables so the random‐effects distribution is subject to misspecification in practice. In this paper, we first study the consequences of misspecifying the random‐effects distribution in nonlinear mixed‐effects models. Our study is focused on Gauss‐Hermite quadrature, which is now the routine method for calculation of the marginal likelihood in mixed models. We then present a formal diagnostic test to check the appropriateness of the assumed random‐effects distribution in nonlinear mixed‐effects models, which is very useful for real data analysis. Our findings show that the estimates of fixed‐effects parameters in nonlinear mixed‐effects models are generally robust to deviations from normality of the random‐effects distribution, but the estimates of variance components are very sensitive to the distributional assumption of random effects. Furthermore, a misspecified random‐effects distribution will either overestimate or underestimate the predictions of random effects. We illustrate the results using a real data application from an intensive pharmacokinetic study.
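Gauss-Hermite quadrature approximates the integral over the random effect by a weighted sum at fixed nodes. A minimal sketch for a random-intercept normal model, chosen because its marginal likelihood is also available in closed form so the quadrature can be verified:

```python
import numpy as np
from scipy.stats import norm

mu, sigma, tau, y = 0.5, 1.0, 0.7, 1.2

# 20-point Gauss-Hermite approximation of
#   integral of N(y; mu + b, sigma^2) * N(b; 0, tau^2) db
# via the substitution b = sqrt(2) * tau * z
z, w = np.polynomial.hermite.hermgauss(20)
b_nodes = np.sqrt(2) * tau * z
marg_gh = (w * norm.pdf(y, mu + b_nodes, sigma)).sum() / np.sqrt(np.pi)

# Closed form for the normal-normal case: y ~ N(mu, sigma^2 + tau^2)
marg_exact = norm.pdf(y, mu, np.sqrt(sigma ** 2 + tau ** 2))
```

In a nonlinear model only the integrand changes (a nonlinear mean function inside `norm.pdf`); the node-and-weight machinery is identical, which is why the quadrature itself cannot detect a misspecified random-effects distribution.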

14.
Growing concern about the health effects of exposure to pollutants and other chemicals in the environment has stimulated new research to detect and quantify environmental hazards. This research has generated many interesting and challenging methodological problems for statisticians. One type of statistical research develops new methods for the design and analysis of individual studies. Because current research of this type is too diverse to summarize in a single article, we discuss current work in two areas of application: the carcinogen bioassay in small rodents and epidemiologic studies of air pollution. To assess the risk of a potentially harmful agent, one must frequently combine evidence from different and often quite dissimilar studies. Hence, this paper also discusses the central role of data synthesis in risk assessment, reviews some of the relevant statistical literature, and considers the role of statisticians in evaluating and combining evidence from diverse sources.

15.
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods exist for making consistent inference about differences in mean efficiency across groups of producers or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
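The FDH estimator drops the convexity assumption and can be computed by simple enumeration rather than linear programming. A minimal sketch of input-oriented FDH efficiency on a toy two-input example (data assumed for illustration):

```python
import numpy as np

# Inputs X (n units x 2 inputs), outputs Y (n units x 1 output)
X = np.array([[2.0, 4.0], [4.0, 2.0], [6.0, 6.0], [3.0, 3.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])

def fdh_input_eff(x0, y0, X, Y):
    """Smallest uniform input contraction of (x0, y0) that is still
    (weakly) dominated by some observed unit producing at least y0."""
    dominating = (Y >= y0).all(axis=1)
    # For each dominating unit, the largest input ratio gives the
    # contraction factor at which x0 would just be dominated by it
    ratios = (X[dominating] / x0).max(axis=1)
    return ratios.min()

eff = np.array([fdh_input_eff(X[i], Y[i], X, Y) for i in range(len(X))])
# Unit 2 uses (6, 6) but unit 3 produces the same output with (3, 3),
# so unit 2's FDH efficiency is 0.5; the others are efficient (1.0)
```

Because the FDH frontier is a step function through observed points, no convex combinations of units are ever invoked, which is exactly the structural hypothesis the paper's tests examine.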

16.
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. An initial investigation into the theoretical and empirical properties of this class of methods is presented. In some cases we observe an advantage in the use of biased weight estimates; some support for their use is presented, but we advocate caution.
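The core idea, estimating a normalizing constant by importance sampling so that the intractable constant never has to be evaluated pointwise, can be sketched on a one-dimensional toy target (not the paper's random-weight SMC machinery):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gamma

rng = np.random.default_rng(3)

# Unnormalized target ptilde(x) = exp(-x^4); its true normalizing
# constant is Z = integral exp(-x^4) dx = Gamma(1/4) / 2
logptilde = lambda x: -x ** 4

x = rng.normal(size=200_000)                  # proposal q = N(0, 1)
w = np.exp(logptilde(x) - norm.logpdf(x))     # importance weights ptilde/q
Z_hat = w.mean()                              # unbiased estimate of Z

Z_true = gamma(0.25) / 2
```

The same ratio-of-weights construction, applied to each of two models, yields a simulation-based Bayes factor estimate; the bias-variance trade-off the paper discusses arises when the weights themselves must be estimated rather than computed exactly as here.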

17.
Multivariate combination-based permutation tests have been widely used in many complex problems. In this paper we focus on the equipower property, derived directly from the finite-sample consistency property, and we analyze the impact of the dependency structure on the combined tests. First, we consider the finite-sample consistency property, which assumes that sample sizes are fixed (and possibly small) and considers a large number of informative variables on each subject. Moreover, since permutation test statistics do not require standardization, we need not assume that data are homoscedastic under the alternative. The equipower property is then derived from these two notions: consider the unconditional permutation power of a test statistic T for fixed sample sizes, with V ≥ 2 independent and identically distributed variables and fixed effect δ, calculated in two ways: (i) by considering two V-dimensional samples of sizes m1 and m2, respectively; (ii) by considering two unidimensional samples of sizes n1 = Vm1 and n2 = Vm2, respectively. Since the unconditional power essentially depends on the non-centrality induced by T, and the two settings have exactly the same likelihood and the same non-centrality, we show that they have the same power function, at least approximately. To investigate both the equipower property and power behavior in the presence of correlation, we performed an extensive simulation study.
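A minimal sketch of a combination-based permutation test for V-dimensional two-sample data (using a simple sum-of-mean-differences combining function, assumed for illustration): the statistic needs no standardization because the permutation distribution supplies its own reference.

```python
import numpy as np

rng = np.random.default_rng(4)
m1, m2, V, delta = 10, 10, 5, 1.0
A = rng.normal(0, 1, (m1, V)) + delta           # shifted group
B = rng.normal(0, 1, (m2, V))

pooled = np.vstack([A, B])
# Combining function: sum of the V per-variable mean differences
T_obs = (A.mean(axis=0) - B.mean(axis=0)).sum()

n_perm, count = 5000, 0
for _ in range(n_perm):
    idx = rng.permutation(m1 + m2)              # exchange group labels
    Ap, Bp = pooled[idx[:m1]], pooled[idx[m1:]]
    if (Ap.mean(axis=0) - Bp.mean(axis=0)).sum() >= T_obs:
        count += 1
pval = (count + 1) / (n_perm + 1)               # one-sided permutation p-value
```

The equipower idea can be probed with this scaffold by comparing rejection rates for (m1, m2, V) against (V·m1, V·m2, 1) under the same effect δ.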

18.

Stepwise regression building procedures are commonly used applied statistical tools, despite their well-known drawbacks. While many of their limitations have been widely discussed in the literature, other aspects of the use of individual statistical fit measures, especially in high-dimensional stepwise regression settings, have not. Giving primacy to individual fit, as is done with p-values and R2, when group fit may be the larger concern, can lead to misguided decision making. One of the most consequential uses of stepwise regression is in health care, where these tools allocate hundreds of billions of dollars to health plans enrolling individuals with different predicted health care costs. The main goal of this “risk adjustment” system is to convey incentives to health plans such that they provide health care services fairly, a component of which is not to discriminate in access or care for persons or groups likely to be expensive. We address some specific limitations of p-values and R2 for high-dimensional stepwise regression in this policy problem through an illustrated example by additionally considering a group-level fairness metric.
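A minimal sketch of forward stepwise selection driven by individual p-values, the practice the abstract critiques (helper names here are hypothetical): at each step the candidate with the smallest p-value enters, until no remaining candidate clears the threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 3] + rng.normal(size=n)          # only column 3 is informative

def last_col_pvalue(Xs, y):
    """OLS with intercept; two-sided p-value for the most recently
    added column (the last one in Xs)."""
    Xd = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    df = len(y) - Xd.shape[1]
    s2 = resid @ resid / df
    cov = s2 * np.linalg.inv(Xd.T @ Xd)
    t = beta[-1] / np.sqrt(cov[-1, -1])
    return 2 * stats.t.sf(abs(t), df)

selected, remaining = [], list(range(p))
while remaining:
    pvals = [last_col_pvalue(X[:, selected + [j]], y) for j in remaining]
    j_best = remaining[int(np.argmin(pvals))]
    if min(pvals) > 0.05:                       # individual-fit stopping rule
        break
    selected.append(j_best)
    remaining.remove(j_best)
```

Nothing in this loop looks at group-level consequences of the fitted model, which is precisely the gap the abstract's fairness metric is meant to fill.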

19.
Typical panel data models make use of the assumption that the regression parameters are the same for each individual cross-sectional unit. We propose tests for slope heterogeneity in panel data models. Our tests are based on the conditional Gaussian likelihood function in order to avoid the incidental parameters problem induced by the inclusion of individual fixed effects for each cross-sectional unit. We derive the Conditional Lagrange Multiplier test that is valid in cases where N → ∞ and T is fixed. The test applies to both balanced and unbalanced panels. We expand the test to account for general heteroskedasticity where each cross-sectional unit has its own form of heteroskedasticity. The modification is possible if T is large enough to estimate regression coefficients for each cross-sectional unit by using the MINQUE unbiased estimator for regression variances under heteroskedasticity. All versions of the test have a standard Normal distribution under general assumptions on the error distribution as N → ∞. A Monte Carlo experiment shows that the test has very good size properties under all specifications considered, including heteroskedastic errors. In addition, power of our test is very good relative to existing tests, particularly when T is not large.

20.
Propensity score matching (PSM) has been widely used to reduce confounding biases in observational studies. Its properties for statistical inference have also been investigated and well documented. However, some recent publications have raised concerns about using PSM, especially its tendency to increase post-matching covariate imbalance, leading to discussion of whether PSM should be used at all. We review empirical and theoretical evidence for and against its use in practice, revisit the property of equal percent bias reduction and adapt it to more practical situations, showing that PSM has some additional desirable properties. With a small simulation, we explore the impact of caliper width on biases due to mismatching in matched samples and due to the difference between matched and target populations, and show that some issues attributed to PSM may be due to inadequate caliper selection. In summary, we argue that the right question is when and how to use PSM rather than whether to use it, and we give suggestions accordingly.
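The role of the caliper can be sketched with greedy 1:1 nearest-neighbour matching on propensity scores (scores are simulated here and assumed already estimated; this is not a specific package's implementation): the caliper caps the score distance within matched pairs, trading matched-sample size for match quality.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical propensity scores for treated and control units
ps_treat = rng.beta(3, 2, 200)
ps_ctrl = rng.beta(2, 3, 1000)

def caliper_match(ps_t, ps_c, caliper):
    """Greedy 1:1 nearest-neighbour matching without replacement;
    a treated unit is left unmatched if no control lies within the caliper."""
    used = np.zeros(len(ps_c), bool)
    pairs = []
    for i in np.argsort(ps_t):           # process treated in score order
        d = np.abs(ps_c - ps_t[i])
        d[used] = np.inf                 # each control used at most once
        j = int(d.argmin())
        if d[j] <= caliper:
            pairs.append((int(i), j))
            used[j] = True
    return pairs

pairs = caliper_match(ps_treat, ps_ctrl, caliper=0.01)
# Mean within-pair score gap: bounded by the caliper by construction
gap = np.mean([abs(ps_treat[i] - ps_ctrl[j]) for i, j in pairs])
```

Widening the caliper matches more treated units (shrinking the matched-versus-target discrepancy) at the cost of larger within-pair gaps (mismatching bias), which is the trade-off the paper's simulation explores.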


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号