20 similar documents were retrieved (search time: 287 ms)
1.
2.
Arijit Chaudhuri Tasos C. Christofides Amitava Saha 《Statistical Methods and Applications》2009,18(3):389-418
In estimating the proportion of people bearing a sensitive attribute A, say, in a given community, following Warner’s (J Am Stat Assoc 60:63–69, 1965) pioneering work, certain randomized response
(RR) techniques are available for application. These are intended to ensure efficient and unbiased estimation protecting a
respondent’s privacy when it touches a person’s socially stigmatizing feature like rash driving, tax evasion, induced abortion,
testing HIV positive, etc. Lanke (Int Stat Rev 44:197–203, 1976), Leysieffer and Warner (J Am Stat Assoc 71:649–656, 1976),
Anderson (Int Stat Rev 44:213–217, 1976, Scand J Stat 4:11–19, 1977) and Nayak (Commun Stat Theor Method 23:3303–3321, 1994)
among others have discussed how maintenance of efficiency is in conflict with protection of privacy. In their RR-related activities
the sample selection is traditionally done by simple random sampling (SRS) with replacement (WR). This paper reports that an essentially similar conflict arises under general unequal-probability sample selection, even without replacement. Large-scale surveys overwhelmingly employ complex designs other than SRSWR, so extending RR techniques to complex designs is essential, and this paper principally addresses them. New jeopardy measures to guard against the revelation of secrets are presented here as modifications of those in the literature, which cover SRSWR alone. Observing that multiple responses are feasible in such a dichotomous situation, especially with Kuk’s (Biometrika 77:436–438, 1990) and Christofides’ (Metrika 57:195–200, 2003) RR devices, an average of the response-specific jeopardizing measures is proposed. This measure, which is device dependent, could be regarded as a technical characteristic of the device and should be made known to the participants before they agree to use the randomization device.
The views expressed are the authors’ own, not those of the organizations they work for. Prof. Chaudhuri’s research is partially supported by CSIR Grant No. 21(0539)/02/EMR-II.
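As a concrete illustration of the classical setting being extended, the sketch below simulates Warner's device under SRSWR and applies the standard unbiased estimator; the design probability p, the true proportion, and the sample size are invented for the example.

```python
import random

def warner_estimate(responses, p):
    """Unbiased Warner (1965) estimator of the sensitive proportion (p != 1/2)."""
    lam = sum(responses) / len(responses)   # observed proportion of 'yes' answers
    return (lam - (1 - p)) / (2 * p - 1)

random.seed(1)
pi_true, p, n = 0.3, 0.7, 100_000
responses = []
for _ in range(n):
    has_A = random.random() < pi_true       # respondent bears attribute A
    direct = random.random() < p            # randomization device: ask about A or not-A
    responses.append(int(has_A if direct else not has_A))

print(round(warner_estimate(responses, p), 2))
```

Because each respondent answers a randomly chosen question, no individual answer reveals whether that person bears A, yet the population proportion remains estimable.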
3.
When one wants to check a tentatively proposed model for departures that are not well specified, looking at residuals is the
most common diagnostic technique. Here, we investigate the use of Bayesian standardized residuals to detect unknown hierarchical
structure. Asymptotic theory, also supported by simulations, shows that the use of Bayesian standardized residuals is effective
when the within group correlation, ρ, is large. However, we show that standardized residuals may not detect hierarchical structure when ρ is small. Thus, if it is important to detect modest hierarchical structure (i.e., ρ small) one should use other diagnostic techniques in addition to the standardized residuals. We use “quality of care” data
from the Patterns of Care Study, a two-stage cluster sample of patients undergoing radiation therapy for cervix cancer, to
illustrate the potential use of these residuals to detect missing hierarchical structure.
4.
Software packages usually report the results of statistical tests using p-values. Users often interpret these values by comparing them with standard thresholds, for example, 0.1, 1, and 5%, which is sometimes reinforced by a star rating (***, **, and *, respectively). We consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, for example, by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals that cover [0, 1] and that can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-value thresholds occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations.
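The bucket idea can be sketched naively as follows. This is not the authors' algorithm, which comes with provable bounds on the resampling risk; the overlapping buckets, the confidence multiplier z, and the toy test below are all invented for illustration.

```python
import math
import random

# Overlapping "p-value buckets" covering [0, 1] (invented for the sketch).
BUCKETS = [(0.0, 0.01), (0.005, 0.05), (0.04, 0.1), (0.09, 1.0)]

def bucket_for(p_hat, half_width):
    """Return a bucket that contains the whole interval p_hat +/- half_width."""
    lo, hi = max(p_hat - half_width, 0.0), min(p_hat + half_width, 1.0)
    for b in BUCKETS:
        if b[0] <= lo and hi <= b[1]:
            return b
    return None

def mc_pvalue_bucket(exceeds, z=3.0, min_n=100, max_n=200_000):
    """Draw Monte Carlo replicates until a CI for p fits inside one bucket.

    exceeds() simulates one replicate and returns True when the simulated
    statistic is at least as extreme as the observed one.
    """
    hits = 0
    p_hat = 0.5
    for n in range(1, max_n + 1):
        hits += exceeds()
        p_hat = (hits + 1) / (n + 2)                 # shrunken estimate of p
        hw = z * math.sqrt(p_hat * (1 - p_hat) / n)  # normal-approx half-width
        if n >= min_n:
            b = bucket_for(p_hat, hw)
            if b is not None:
                return b, p_hat
    return None, p_hat

random.seed(0)
# Toy test whose exact p-value is 0.03: a replicate is 'extreme' w.p. 0.03.
b, p_hat = mc_pvalue_bucket(lambda: random.random() < 0.03)
```

Note how the overlap matters: a p-value sitting exactly on a threshold such as 0.05 would never let a confidence interval fit inside a non-overlapping bucket, so the sampling could run forever.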
5.
In this paper we present a fully model-based analysis of the effects of suppression and failure in data transmission with
sensor networks. Sensor networks are becoming an increasingly common data collection mechanism in a variety of fields. Sensors
can be created to collect data at very high temporal resolution. However, during periods when the process is following a stable
path, transmission of such high resolution data would carry little additional information with regard to the process model,
i.e., all of the data that is collected need not be transmitted. In particular, when there is cost to transmission, we find
ourselves moving to consideration of suppression in transmission. Additionally, for many sensor networks, in practice, we
will experience failures in transmission—messages sent by a sensor but not received at the gateway, messages sent but arriving
corrupted. Evidently, both suppression and failure lead to information loss which will be reflected in inference associated
with our process model. Our effort here is to assess the impact of such information loss under varying extents of suppression
and varying incidence of failure. We consider two illustrative process models, presenting fully model-based analyses of suppression
and failure using hierarchical models. Such models naturally facilitate borrowing strength across nodes, leveraging all available
data to learn about local process behavior.
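A toy sketch of the suppression mechanism described above, under an assumed "send-on-delta" rule: a reading is transmitted only when it differs from the last transmitted value by more than a tolerance, so stable stretches of the series are suppressed. The tolerance and the random-walk process are invented; the paper's model-based treatment is far more elaborate.

```python
import random

def suppress(series, tol=0.5):
    """Return the (time, value) pairs a send-on-delta sensor would transmit."""
    sent, last = [], None
    for t, y in enumerate(series):
        if last is None or abs(y - last) > tol:
            sent.append((t, y))
            last = y
    return sent

random.seed(3)
series = [0.0] * 50
for t in range(1, 50):                       # slowly drifting process
    series[t] = series[t - 1] + random.gauss(0, 0.1)

sent = suppress(series)                      # far fewer messages than readings
```

The gateway can then treat long silences as informative ("the process stayed within tol of the last report"), which is exactly the information-loss trade-off the paper quantifies.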
6.
Daniel Wartenberg 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2001,164(1):13-22
The investigation of disease clusters, aggregations of a few to several cases of disease, remains a controversial issue in epidemiology and public health. Such clusters are reported at a rate of more than three per day nationally, and responding to each commands substantial resources. This paper considers whether scientists or public health officials should investigate disease clusters, when they should, and, if so, how. Part of the disparity in opinions arises from differing goals: trying to identify new carcinogens versus identifying situations in which people are at risk. There are also differences in investigation strategies: passive versus active. This paper suggests that a more active surveillance programme with occasional investigation may best meet the needs of the public while accommodating the limited resources of public health officials and some of the concerns of epidemiologists.
7.
This paper examines the similarities and differences between the scopes and methods used domestically and internationally to measure government health expenditure, and notes that international comparisons of government health expenditure must take account of differences in measurement scope. An international comparison of government health expenditure under the OECD definition shows that the overall level of Chinese government health expenditure is low and that public financing of medical care is insufficient. Government health investment must be increased, a sound health-security system established and improved, and the government’s leading role in health financing strengthened.
8.
Martyn Andrews Obbey Elamin Kostas Kyriakoulis Matthew Sutton 《Econometric Reviews》2017,36(1-3):23-41
In his 1999 article with Breusch, Qian, and Wyhowski in the Journal of Econometrics, Peter Schmidt introduced the concept of “redundant” moment conditions. Such conditions arise when estimation is based on moment conditions that are valid and can be divided into two subsets: one that identifies the parameters and another that provides no further information. Their framework highlights an important concept in the moment-based estimation literature, namely, that not all valid moment conditions need be informative about the parameters of interest. In this article, we demonstrate the empirical relevance of the concept in the context of the impact of government health expenditure on health outcomes in England. Using a simulation study calibrated to this data, we perform a comparative study of the finite sample performance of inference procedures based on the Generalized Method of Moments (GMM) and info-metric (IM) estimators. The results indicate that the properties of GMM procedures deteriorate as the number of redundant moment conditions increases; in contrast, the IM methods provide reliable point estimators, but the performance of associated inference techniques based on first order asymptotic theory, such as confidence intervals and overidentifying restriction tests, deteriorates as the number of redundant moment conditions increases. However, for IM methods, it is shown that bootstrap procedures can provide reliable inferences; we illustrate such methods when analysing the impact of government health expenditure on health outcomes in England.
9.
The increased emphasis on evidence-based medicine creates a greater need for educating future physicians in the general domain of quantitative reasoning, probability, and statistics. Reflecting this trend, more medical schools now require applicants to have taken an undergraduate course in introductory statistics. Given the breadth of statistical applications, we should cover in that course certain essential topics that may not be covered in the more general introductory statistics course. In selecting and presenting such topics, we should bear in mind that doctors also need to communicate probabilistic concepts of risks and benefits to patients who are increasingly expected to be active participants in their own health care choices despite having no training in medicine or statistics. It is also important that interesting and relevant examples accompany the presentation, because the examples (rather than the details) are what students tend to retain years later. Here, we present a list of topics we cover in the introductory biostatistics course that may not be covered in the general introductory course. We also provide some of our favorite examples for discussing these topics.
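As one example of the kind of topic the authors have in mind, here is the classic positive-predictive-value calculation for communicating diagnostic risk to patients; the test characteristics and prevalence are invented.

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' rule."""
    true_pos = prevalence * sensitivity               # P(disease and test+)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(healthy and test+)
    return true_pos / (true_pos + false_pos)

# A 99%-sensitive, 95%-specific test for a 1-in-1000 condition: despite
# the impressive-sounding accuracy, a positive result implies only about
# a 2% chance of actually having the disease.
print(round(ppv(0.001, 0.99, 0.95), 3))
```

The counterintuitive answer, driven by the low prevalence, is exactly the sort of example that students tend to remember.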
10.
In this paper, we study the MDPDE (minimizing a density power divergence estimator), proposed by Basu et al. (Biometrika 85:549–559,
1998), for mixing distributions whose component densities are members of some known parametric family. As with the ordinary
MDPDE, we also consider a penalized version of the estimator, and show that both are consistent in the sense of weak convergence.
A simulation result is provided to illustrate the robustness. Finally, we apply the penalized method to analyzing the red
blood cell SLC data presented in Roeder (J Am Stat Assoc 89:487–495, 1994).
This research was supported (in part) by KOSEF through the Statistical Research Center for Complex Systems at Seoul National University.
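A minimal sketch of the density power divergence criterion of Basu et al. (1998) for a single normal location family with known scale, not the mixture setting of the paper; alpha, the grid search, and the contaminated sample are invented to show the robustness.

```python
import math
import random

def dpd_objective(mu, data, alpha, sigma=1.0):
    """Empirical density power divergence objective (data-only constants dropped)."""
    def phi(x):  # N(mu, sigma^2) density
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    # Closed form of the integral of phi^(1+alpha) for the normal family.
    integral = 1.0 / ((sigma * math.sqrt(2 * math.pi)) ** alpha * math.sqrt(1 + alpha))
    avg = sum(phi(x) ** alpha for x in data) / len(data)
    return integral - (1 + alpha) / alpha * avg

def mdpde_mean(data, alpha=0.5):
    grid = [i / 100 for i in range(-300, 301)]   # crude grid search over mu
    return min(grid, key=lambda m: dpd_objective(m, data, alpha))

random.seed(2)
clean = [random.gauss(0.0, 1.0) for _ in range(200)]
contaminated = clean + [10.0] * 20               # 10% gross outliers
est = mdpde_mean(contaminated)                   # stays near the true mean 0
naive = sum(contaminated) / len(contaminated)    # pulled toward the outliers
```

Because outliers enter the objective only through their (here vanishingly small) density raised to the power alpha, they barely influence the fit, which is the robustness property the simulation in the paper illustrates.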
11.
12.
Marco Marozzi 《Statistical Papers》2012,53(1):61-72
A class of tests due to Shoemaker (Commun Stat Simul Comput 28:189–205, 1999) for differences in scale, valid for a variety of both skewed and symmetric distributions whether location is known or unknown, is considered. The class is based on the interquantile range and requires that the population variances be finite. In this paper, we firstly propose a permutation
version of it that does not require finite variances and is remarkably more powerful than the original one.
Secondly, we address the question of which quantile to choose by proposing a combined interquantile test based on our permutation version of the Shoemaker tests. Shoemaker showed that the more extreme interquantile-range tests are more powerful than the less extreme ones, unless the underlying distributions are very highly skewed. Since in practice one may not know whether the underlying distributions are very highly skewed, the question of which test to use arises. The combined interquantile test resolves this question: it is robust and more powerful than the stand-alone tests. Thirdly, we conducted a much more detailed simulation study than that of Shoemaker (1999), which compared his tests to the F and squared-rank tests and showed that his tests are better. Since the F and squared-rank tests perform poorly for differences in scale, his results suffer from this drawback; for this reason, instead of the squared-rank test we consider, following the suggestions of several authors, tests due to Brown and Forsythe (J Am Stat Assoc 69:364–367, 1974), Pan (J Stat Comput Simul 63:59–71, 1999), O’Brien (J Am Stat Assoc 74:877–880, 1979) and Conover et al. (Technometrics 23:351–361, 1981).
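A hedged sketch of a permutation scale test in this spirit, using the ratio of sample 0.1-0.9 interquantile ranges as the statistic; the quantile choice, sample sizes, and distributions are invented, and the combined test of the paper is considerably more refined.

```python
import random

def iqr(values, lo=0.1, hi=0.9):
    """Sample lo-hi interquantile range (crude order-statistic version)."""
    s = sorted(values)
    n = len(s)
    return s[int(hi * (n - 1))] - s[int(lo * (n - 1))]

def perm_scale_test(x, y, n_perm=2000, seed=0):
    """Permutation p-value for a scale difference between x and y."""
    rng = random.Random(seed)
    obs = iqr(x) / iqr(y)
    obs = max(obs, 1 / obs)                      # two-sided in the ratio
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random reassignment to groups
        stat = iqr(pooled[:len(x)]) / iqr(pooled[len(x):])
        if max(stat, 1 / stat) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(60)]
y = [rng.gauss(0, 3) for _ in range(60)]         # three-fold scale difference
p = perm_scale_test(x, y)
```

Because the reference distribution is generated by permutation rather than asymptotics, no finite-variance condition is needed, which is the advantage claimed for the permutation version.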
13.
K. Aiyappan Nair 《Journal of statistical planning and inference》1982,6(2):119-122
In this paper we consider the estimation of the common mean of two normal populations when the variances are unknown. If it is known that one specified variance is smaller than the other, then it is possible to modify the Graybill-Deal estimator in order to obtain a more efficient estimator. One such estimator is proposed by Mehta and Gurland (1969). We prove that this estimator is more efficient than the Graybill-Deal estimator under the condition that one variance is known to be less than the other.
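The Graybill-Deal estimator itself is a one-liner: each sample mean is weighted by the reciprocal of its estimated variance. The summary statistics below are invented for illustration.

```python
def graybill_deal(xbar1, s1_sq, n1, xbar2, s2_sq, n2):
    """Graybill-Deal estimator of the common mean of two normal samples."""
    w1 = n1 / s1_sq          # precision weight of sample 1
    w2 = n2 / s2_sq          # precision weight of sample 2
    return (w1 * xbar1 + w2 * xbar2) / (w1 + w2)

# Two samples of 50 with the same underlying mean but different variances:
# the estimate leans toward the more precise first sample.
print(round(graybill_deal(10.2, 1.0, 50, 9.8, 4.0, 50), 2))   # 10.12
```

The modification studied in the paper exploits the extra knowledge that one variance is the smaller one, which the plain weighting above ignores.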
14.
In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly
after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that
such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient’s
response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which
is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing
data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric
theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable
when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which
may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators
that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates.
15.
Shola Adeyemi Thierry Chaussalet Eren Demir 《Statistical Methods and Applications》2011,20(4):507-518
Modelling patient flow in health care systems is considered to be vital in understanding the operational and clinical functions
of the system and may therefore prove to be useful in improving the functionality of the health care system, and most importantly
provide an evidence-based approach for decision making, particularly in the management of the system. In this paper, we introduce a nonproportional cumulative odds random effects model for patient pathways by relaxing the proportionality assumption of the cumulative odds model. Using the probability integral transform, we have extended this to cases where the random effects are not normal, specifically gamma and exponentially distributed. One advantage of these models is that they depict changes in the wellbeing (frailties) of patients as they move from one stage of care to the next over time. This is a hybrid extension of our earlier work, jointly including pathways and covariates to explain the probabilities of transition and discharge, which could easily be used to predict the outcome of treatment. The models here show that the inclusion of pathways renders patient characteristics insignificant. Thus, pathways provide a more useful source of information about transition and discharge than patient characteristics, especially when the model is applied to a London University Neonatal Unit dataset. Bootstrapping was then used to investigate the stability, consistency and generalizability of the estimated parameters.
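The bootstrap check mentioned above can be sketched generically: resample the data with replacement, refit the estimator, and inspect the spread of the refitted values. The estimator here is just the sample mean, as a stand-in for the paper's model fits, and the data are simulated.

```python
import random

def bootstrap_se(data, estimator, n_boot=1000, seed=0):
    """Bootstrap standard error of an estimator: spread of refitted values."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]   # sample with replacement
        stats.append(estimator(resample))
    mean = sum(stats) / n_boot
    return (sum((s - mean) ** 2 for s in stats) / (n_boot - 1)) ** 0.5

rng = random.Random(1)
data = [rng.gauss(5, 2) for _ in range(100)]
se = bootstrap_se(data, lambda d: sum(d) / len(d))    # close to 2/sqrt(100) = 0.2
```

Unstable or poorly generalizing parameter estimates show up as large bootstrap spread, which is the diagnostic use the authors describe.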
16.
Cindy Xin Feng 《Journal of applied statistics》2015,42(6):1206-1222
In disease mapping, health outcomes measured at the same spatial locations may be correlated, so one can consider jointly modeling the multivariate health outcomes, accounting for their dependence. The general approaches often used for joint modeling include shared component models and multivariate models. An alternative way to model the association between two health outcomes, when one outcome can naturally serve as a covariate of the other, is to use an ecological regression model. For example, in our application, preterm birth (PTB) can be treated as a predictor for low birth weight (LBW) and vice versa. Therefore, we propose to blend the ideas from joint modeling and ecological regression methods to jointly model the relative risks for LBW and PTBs over the health districts in Saskatchewan, Canada, in 2000–2010. This approach is helpful when proxies for areal-level contextual factors can be derived from the outcomes themselves and direct information on risk factors is not readily available. Our results indicate that the proposed approach improves the model fit when compared with the conventional joint modeling methods. Further, we showed that when no strong spatial autocorrelation is present, joint outcome modeling using only independent error terms can still provide a better model fit when compared with the separate modeling.
17.
A convention in designing randomized clinical trials has been to choose sample sizes that yield specified statistical power when testing hypotheses about treatment response. Manski and Tetenov recently critiqued this convention and proposed enrollment of sufficiently many subjects to enable near-optimal treatment choices. This article develops a refined version of that analysis applicable to trials comparing aggressive treatment of patients with surveillance. The need for a refined analysis arises because the earlier work assumed that there is only a primary health outcome of interest, without secondary outcomes. An important aspect of choice between surveillance and aggressive treatment is that the latter may have side effects. One should then consider how the primary outcome and side effects jointly determine patient welfare. This requires new analysis of sample design. As a case study, we reconsider a trial comparing nodal observation and lymph node dissection when treating patients with cutaneous melanoma. Using a statistical power calculation, the investigators assigned 971 patients to dissection and 968 to observation. We conclude that assigning 244 patients to each option would yield findings that enable suitably near-optimal treatment choice. Thus, a much smaller sample size would have sufficed to inform clinical practice.
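The convention being critiqued can be made concrete with the standard normal-approximation sample-size formula for comparing two proportions; the effect size and error rates below are invented and are not those of the melanoma trial.

```python
import math

def n_per_arm(p1, p2, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sample test of proportions.

    Standard normal-approximation formula with a two-sided level alpha
    (z values hardcoded for alpha=0.05, power=0.9).
    """
    z_a, z_b = 1.959964, 1.281552        # z_{0.975} and z_{0.90}
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a 60% vs 70% response rate with 90% power at the 5% level:
print(n_per_arm(0.6, 0.7))
```

The paper's point is that n chosen this way answers a hypothesis-testing question, whereas the welfare-based criterion asks how many subjects suffice to make a near-optimal treatment choice, often a far smaller number.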
18.
Statistical issues in the prospective monitoring of health outcomes across multiple units
Clare Marshall Nicky Best Alex Bottle Paul Aylin 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2004,167(3):541-559
Summary. Following several recent inquiries in the UK into medical malpractice and failures to deliver appropriate standards of health care, there is pressure to introduce formal monitoring of performance outcomes routinely throughout the National Health Service. Statistical process control (SPC) charts have been widely used to monitor medical outcomes in a variety of contexts and have been specifically advocated for use in clinical governance. However, previous applications of SPC charts in medical monitoring have focused on surveillance of a single process over time. We consider some of the methodological and practical aspects that surround the routine surveillance of health outcomes and, in particular, we focus on two important methodological issues that arise when attempting to extend SPC charts to monitor outcomes at more than one unit simultaneously (where a unit could be, for example, a surgeon, general practitioner or hospital): the need to acknowledge the inevitable between-unit variation in 'acceptable' performance outcomes due to the net effect of many small unmeasured sources of variation (e.g. unmeasured case mix and data errors) and the problem of multiple testing over units as well as time. We address the former by using quasi-likelihood estimates of overdispersion, and the latter by using recently developed methods based on estimation of false discovery rates. We present an application of our approach to annual monitoring of 'all-cause' mortality data between 1995 and 2000 from 169 National Health Service hospital trusts in England and Wales.
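On the multiple-testing side, a concrete instance of a false-discovery-rate procedure is the classical Benjamini-Hochberg step-up rule applied to per-unit p-values; the p-values below are invented, and the estimation-based FDR methods of the paper differ in detail.

```python
def benjamini_hochberg(pvals, q=0.1):
    """Indices of hypotheses rejected by the BH step-up rule at FDR level q."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:     # step-up comparison with q*rank/m
            k = rank                     # largest rank passing the threshold
    return sorted(order[:k])

# Seven hypothetical per-unit p-values; the first four are flagged.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.62, 0.83]
print(benjamini_hochberg(pvals, q=0.1))
```

Unlike a per-unit threshold of 0.05, the step-up rule controls the expected proportion of falsely flagged units, which matters when many units are screened every year.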
19.
Two-phase study designs can reduce cost and other practical burdens associated with large scale epidemiologic studies by limiting
ascertainment of expensive covariates to a smaller but informative sub-sample (phase-II) of the main study (phase-I). During
the analysis of such studies, however, subjects who are selected at phase-I but not at phase-II remain informative as they
may have partial covariate information. A variety of semi-parametric methods now exist for incorporating such data from phase-I
subjects when the covariate information can be summarized into a finite number of strata. In this article, we consider extending
the pseudo-score approach proposed by Chatterjee et al. (J Am Stat Assoc 98:158–168, 2003) using a kernel smoothing approach
to incorporate information on continuous phase-I covariates. Practical issues and algorithms for implementing the methods
using existing software are discussed. A sandwich-type variance estimator based on the influence function representation of
the pseudo-score function is proposed. The finite sample performance of the methods is studied using simulated data. The advantage of the proposed smoothing approach over alternative methods that use discretized phase-I covariate information is illustrated using two-phase data simulated within the National Wilms Tumor Study (NWTS).
20.
Redfern P 《Journal of official statistics》1986,2(4):415-424
"During the past twenty years Scandinavian countries have made changes in the methods of taking population and housing censuses that are more fundamental than any seen since modern census methods were first introduced two hundred years ago. These countries extract their census data in part or in whole from administrative registers. If other countries in Western Europe were to adopt this approach, most of them would have to make major improvements to their administrative records. But the primary reasons for making such improvements are concerned with administration and policy rather than statistics, namely, the need to secure a more effective and fairer system of public administration and to enable governments to exercise a wider range of policy options."