A total of 20 similar documents were found; search took 15 milliseconds.
1.
Time‐to‐event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time‐to‐event data on two time scales can offer a more extensive insight into the phenomenon. We introduce a non‐parametric Bayesian intensity model to analyse two‐dimensional point processes on Lexis diagrams. After a simple discretization of the two‐dimensional process, we model the intensity by one‐dimensional piecewise constant hazard functions parametrized by the change points and corresponding hazard levels. Our prior distribution incorporates a built‐in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.
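As a concrete illustration of the discretized building block (not the paper's reversible-jump sampler or its two-dimensional prior), the following sketch computes occurrence/exposure hazard estimates on a one-dimensional age grid; the data and grid choices are invented for illustration:

```python
import numpy as np

# Hedged sketch: occurrence/exposure estimates of a piecewise-constant
# hazard on a discretized age axis -- the basic ingredient behind a
# discretized Lexis-diagram intensity model. Events and exposure beyond
# the last band edge are treated as censored at that edge.
rng = np.random.default_rng(0)

n = 500
age_entry = rng.uniform(0, 5, n)          # age at entry (years)
follow_up = rng.exponential(2.0, n)       # observed follow-up time
event = rng.random(n) < 0.7               # event indicator (else censored)

edges = np.arange(0, 11, 1.0)             # yearly age bands [0,1), ..., [9,10)
K = len(edges) - 1
events = np.zeros(K)
exposure = np.zeros(K)

for a0, t, d in zip(age_entry, follow_up, event):
    a1 = a0 + t                           # age at exit
    for k in range(K):
        lo, hi = edges[k], edges[k + 1]
        exposure[k] += max(0.0, min(a1, hi) - max(a0, lo))
        if d and lo <= a1 < hi:
            events[k] += 1

with np.errstate(divide="ignore", invalid="ignore"):
    hazard = np.where(exposure > 0, events / exposure, np.nan)
print(hazard)
```

A full analogue of the abstract's model would repeat this on an (age, time) grid and smooth the cell-wise levels with a prior rather than report the raw ratios.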
2.
Abstract. In many epidemiological studies, disease occurrences and their rates are naturally modelled by counting processes and their intensities, allowing an analysis based on martingale methods. Applied to the Mantel–Haenszel estimator, these methods lend themselves to the analysis of general control selection sampling designs and the accommodation of time-varying exposures.
3.
Cyril Favre‐Martinoz David Haziza Jean‐François Beaumont 《Scandinavian Journal of Statistics》2016,43(4):1019-1034
Influential units occur frequently in surveys, especially in business surveys that collect economic variables whose distributions are highly skewed. A unit is said to be influential when its inclusion or exclusion from the sample has an important impact on the sampling error of estimates. We extend the concept of conditional bias attached to a unit and propose a robust version of the double expansion estimator, which depends on a tuning constant. We determine the tuning constant that minimizes the maximum estimated conditional bias. Our results can be naturally extended to the case of unit nonresponse, the set of respondents often being viewed as a second‐phase sample. A robust version of calibration estimators, based on auxiliary information available at both phases, is also constructed.
4.
The spectral analysis of Gaussian linear time-series processes is usually based on uni-frequential tools, because the spectral density functions of degree 2 and higher are identically zero and there is no polyspectrum in this case. In finite samples, such an approach does not allow the resolution of closely adjacent spectral lines, except by using autoregressive models of excessively high order in the method of maximum entropy. In this article, multi-frequential periodograms designed for the analysis of discrete and mixed spectra are defined and studied for their properties in finite samples. For a given vector of frequencies ω, the sum of squares of the corresponding trigonometric regression model fitted to a time series by unweighted least squares defines the multi-frequential periodogram statistic I_M(ω). When ω is unknown, it follows from the properties of nonlinear models whose parameters separate (i.e., the frequencies and the cosine and sine coefficients here) that the least-squares estimator of the frequencies is obtained by maximizing I_M(ω). The first-order, second-order and distribution properties of I_M(ω) are established theoretically in finite samples, and are compared with those of Schuster's uni-frequential periodogram statistic. In the multi-frequential periodogram analysis, the least-squares estimator of the frequencies is proved to be theoretically unbiased in finite samples if the number of periodic components of the time series is correctly estimated. Here, this number is estimated at the end of a stepwise procedure based on pseudo-F likelihood ratio tests. Simulations are used to compare the stepwise procedure involving I_M(ω) with a stepwise procedure using Schuster's periodogram, to study an approximation of the asymptotic theory for the frequency estimators in finite samples in relation to the proximity and signal-to-noise ratio of the periodic components, and to assess the robustness of I_M(ω) against autocorrelation in the analysis of mixed spectra. Overall, the results show an improvement of the new method over the classical approach when spectral lines are adjacent. Finally, three examples with real data illustrate specific aspects of the method, and extensions (i.e., unequally spaced observations, trend modeling, replicated time series, periodogram matrices) are outlined.
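The defining computation is simple to sketch from the abstract: fit a trigonometric regression at the candidate frequencies by unweighted least squares, take the regression sum of squares as I_M(ω), and maximize over frequencies. The simulated series, the search grid and the inclusion of an intercept are illustrative assumptions:

```python
import numpy as np

def multifreq_periodogram(y, omegas):
    """Sketch of I_M(omega): regression sum of squares of a trigonometric
    model with cosine/sine terms at each candidate frequency, fitted by
    unweighted least squares (intercept included by assumption)."""
    t = np.arange(len(y))
    cols = [np.ones(len(y))]
    for w in omegas:
        cols.append(np.cos(2 * np.pi * w * t))
        cols.append(np.sin(2 * np.pi * w * t))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    return np.sum((fitted - y.mean()) ** 2)   # sum of squares due to the fit

# Two closely adjacent spectral lines plus noise.
rng = np.random.default_rng(1)
t = np.arange(200)
y = (np.sin(2 * np.pi * 0.10 * t)
     + 0.5 * np.sin(2 * np.pi * 0.11 * t)
     + rng.normal(scale=0.3, size=200))

# Least-squares frequency estimates maximize I_M over a grid of pairs.
grid = np.linspace(0.05, 0.15, 101)
best = max(((w1, w2) for w1 in grid for w2 in grid if w1 < w2),
           key=lambda ws: multifreq_periodogram(y, ws))
print(best)
```

With lines only 0.01 apart, a single-frequency (Schuster-style) scan struggles at this sample size, whereas the joint fit separates them, which is the behaviour the abstract reports.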
5.
This paper deals with the estimation of the population median in simple and stratified random sampling using auxiliary information. Auxiliary information is rarely used in estimating the population median, although there have been many studies estimating the population mean with auxiliary information. In this study, we suggest estimators using auxiliary information such as the mode and range of an auxiliary variable and the correlation coefficient. We also extend these estimators to stratified random sampling, in both combined and separate forms. We obtain mean square error equations for all proposed estimators and derive theoretical conditions for their efficiency. These conditions are also supported by numerical examples.
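A minimal sketch of one such idea, a ratio-type median estimator using a known population median of the auxiliary variable; the abstract's estimators based on the mode, range and correlation coefficient are not reproduced here, and all data are simulated:

```python
import numpy as np

# Hedged sketch: ratio-type adjustment of the sample median using the
# known population median M_x of an auxiliary variable. This is only one
# illustrative form, not the paper's full set of estimators.
rng = np.random.default_rng(2)

N = 10_000
x = rng.lognormal(mean=3.0, sigma=0.5, size=N)   # auxiliary variable
y = 2.0 * x + rng.normal(scale=5.0, size=N)      # study variable
M_x = np.median(x)                               # known from the frame

idx = rng.choice(N, size=200, replace=False)     # SRSWOR sample
m_y, m_x = np.median(y[idx]), np.median(x[idx])

median_ratio = m_y * (M_x / m_x)                 # ratio-type median estimator
print(median_ratio, np.median(y))
```

When x and y are strongly related, the factor M_x / m_x corrects the sample median for a sample that happened to draw unusually large or small auxiliary values.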
6.
We propose separate ratio estimators for the population variance in stratified random sampling. We obtain mean square error equations and compare the efficiencies of the proposed estimators with each other. From these comparisons, we find the conditions under which some proposed estimators are more efficient than others. The proposed classes of estimators are shown to be more efficient than the usual unbiased estimator, and the separate ratio estimators are more efficient than the combined ratio estimators for the population variance. The theoretical results are supported by a numerical illustration with original data, and a simulation study investigates the empirical performance of the estimators.
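The per-stratum ratio adjustment underlying a separate estimator can be sketched as follows; the estimand shown (a weighted within-stratum variance) and all data are illustrative simplifications, not the paper's exact setup:

```python
import numpy as np

# Hedged sketch: in each stratum, the sample variance of y is rescaled by
# the ratio of the known auxiliary variance S2_xh to its sample analogue,
# then the strata are combined with their weights ("separate" form).
rng = np.random.default_rng(3)

strata = []
for mu, sd, Nh in [(10, 2.0, 4000), (20, 4.0, 3000), (40, 8.0, 3000)]:
    xh = rng.normal(mu, sd, Nh)
    yh = 1.5 * xh + rng.normal(scale=1.0, size=Nh)
    strata.append((xh, yh))

N = sum(len(x) for x, _ in strata)
est, target = 0.0, 0.0
for xh, yh in strata:
    Wh = len(xh) / N
    S2xh = xh.var(ddof=1)                    # known auxiliary stratum variance
    idx = rng.choice(len(xh), 100, replace=False)
    s2yh, s2xh = yh[idx].var(ddof=1), xh[idx].var(ddof=1)
    est += Wh * s2yh * (S2xh / s2xh)         # per-stratum ratio adjustment
    target += Wh * yh.var(ddof=1)            # illustrative estimand
print(est, target)
```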
7.
In this paper, we consider the analysis of recurrent event data that examines the differences between two treatments. The outcomes that are considered in the analysis are the pre-randomisation event count and post-randomisation times to first and second events with associated cure fractions. We develop methods that allow pre-randomisation counts and two post-randomisation survival times to be jointly modelled under a Poisson process framework, assuming that outcomes are predicted by (unobserved) event rates. We apply these methods to data that examine the difference between immediate and deferred treatment policies in patients presenting with single seizures or early epilepsy. We find evidence to suggest that post-randomisation seizure rates change at randomisation and following a first seizure after randomisation. We also find that there are cure rates associated with the post-randomisation times to first and second seizures. The increase in power over standard survival techniques, offered by the joint models that we propose, resulted in more precise estimates of the treatment effect and the ability to detect interactions with covariate effects.
8.
Yosihiko Ogata Koichi Katsura Niels Keiding Claus Holst & Anders Green 《Scandinavian Journal of Statistics》2000,27(3):415-432
We analyse the (age, time)-specific incidence of diabetes based on retrospective data obtained from a prevalent cohort including only survivors to a particular date. From underlying point processes with intensities corresponding to the (age, time)-specific incidence rates, the observed point pattern is assumed to be generated by an independent thinning process with parameters (assumed known) depending on population density and survival probability to the sampling date. A Bayesian procedure is carried out for the optimal adjustment and comparison of isotropic and anisotropic smoothing priors for the intensity functions, as well as for the decomposition of the intensity on the (time, age) Lexis diagram into the three factors of age, period and cohort.
9.
Bryan Langholz 《Scandinavian Journal of Statistics》2007,34(1):120-136
Abstract. Four case studies are presented to illustrate how information available on cohort members can be used to inform the control selection in epidemiologic case-control studies. The basic framework is the nested case-control paradigm and accompanying analysis methods. Emphasis is on development of intuition for choosing study design candidates, the form of the estimators, and extensions of the basic theory to solve design and analysis problems.
10.
Renato Assunção Andréa Tavares Thais Correa Martin Kulldorff 《Revue canadienne de statistique》2007,35(1):9-25
The authors propose a new type of scan statistic to test for the presence of space‐time clusters in point process data, when the goal is to identify and evaluate the statistical significance of localized clusters. Their method is based only on point patterns for cases; it does not require any specific knowledge of the underlying population. The authors propose to scan the three‐dimensional space with a score test statistic under the null hypothesis that the underlying point process is an inhomogeneous Poisson point process with space and time separable intensity. The alternative is that there are one or more localized space‐time clusters. Their method has been implemented in a computationally efficient way so that it can be applied routinely. They illustrate their method with space‐time crime data from Belo Horizonte, a Brazilian city, in addition to presenting a Monte Carlo study to analyze the power of their new test.
11.
Efthymios G. Tsionas 《Communications in Statistics - Theory and Methods》2013,42(10):1689-1706
This article considers explicit and detailed theoretical and empirical Bayesian analysis of the well-known Poisson regression model for count data with unobserved individual effects based on the lognormal, rather than the popular negative binomial, distribution. Although the negative binomial distribution leads to analytical expressions for the likelihood function, a Poisson-lognormal model is closer to the concept of regression with normally distributed innovations and accounts for excess zeros as well. Such models have been considered widely in the literature (Winkelmann, 2008). The article also provides the necessary theoretical results regarding the posterior distribution of the model. Given that the likelihood function involves integrals with respect to the latent variables, numerical methods organized around Gibbs sampling with data augmentation are proposed for likelihood analysis of the model. The methods are applied to the patent-R&D relationship of 70 US pharmaceutical and biomedical companies, and the Poisson-lognormal model is found to perform better than Poisson regression or negative binomial regression models.
12.
Statistical process monitoring (SPM) has been used extensively to assure the quality of the output of industrial processes, and its techniques have also been applied efficiently to non‐industrial processes over the last two decades. A field of application of great interest is public health monitoring, where a pitfall is that the available samples are not always random. In the majority of cases, we monitor measurements derived from patient admissions to a hospital against control limits that were calculated from a sample of data taken from an epidemiological survey. In this work, we bridge the gap created by a change in the sampling scheme from Phase I to Phase II, studying the case where the sampling during Phase II is biased. We present the appropriate methodology and then apply extensive numerical simulation to explore its performance for measurements following various asymmetrical distributions. The simulations show that the proposed methodology performs significantly better than the standard procedure.
13.
In this paper, an improved generalized difference-cum-ratio-type estimator for the finite population variance under a two-phase sampling design is proposed. The expressions for the bias and mean square error (MSE) are derived to the first order of approximation. The proposed estimator is more efficient than the usual sample variance estimator, the traditional ratio and regression estimators, the chain ratio-type and chain ratio-product-type estimators, and the Jhajj and Walia (2011) estimator. Four datasets are used to illustrate the performance of the different estimators.
14.
Brent A. Johnson 《Scandinavian Journal of Statistics》2017,44(2):545-562
Occasionally, investigators collect auxiliary marks at the time of failure in a clinical study. Because the failure event may be censored at the end of the follow‐up period, these marked endpoints are subject to induced censoring. We propose two new families of two‐sample tests for the null hypothesis of no difference in mark‐scale distribution that allow for arbitrary associations between mark and time. One family of proposed tests is a nonparametric extension of an existing semi‐parametric linear test of the same null hypothesis, while a second family of tests is based on novel marked rank processes. Simulation studies indicate that the proposed tests have the desired size and possess adequate statistical power to reject the null hypothesis under a simple change of location in the marginal mark distribution. When the marginal mark distribution has heavy tails, the proposed rank‐based tests can be nearly twice as powerful as linear tests.
15.
Devin Incerti Michael T. Bretscher Ray Lin Chris Harbron 《Pharmaceutical statistics》2023,22(1):162-180
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients, on either measured or unmeasured variables, and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta.
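The core meta-analytic adjustment can be sketched with invented numbers: log hazard ratios comparing trial controls with external controls in historical reference studies estimate a mean bias and a between-study variance, which then shift and widen a new external-control estimate. This is a schematic of the idea, not the ecmeta implementation:

```python
import numpy as np

# Hedged sketch. Historical log HRs (trial control vs external control)
# and their standard errors are invented for illustration.
log_hr_refs = np.array([-0.25, 0.02, -0.18, -0.01, -0.12])
se_refs = np.array([0.06, 0.07, 0.05, 0.08, 0.06])

# Inverse-variance weighted mean bias, and a simple method-of-moments
# between-study variance (floored at zero).
mu = np.average(log_hr_refs, weights=1 / se_refs**2)
tau2 = max(0.0, log_hr_refs.var(ddof=1) - np.mean(se_refs**2))

# New single-arm study versus its external control arm (invented).
log_hr_new, se_new = -0.35, 0.10
log_hr_adj = log_hr_new - mu            # bias-corrected log hazard ratio
se_adj = np.sqrt(se_new**2 + tau2)      # SE inflated for extra variability
print(np.exp(log_hr_adj), se_adj)
```

The direction matches the abstract's empirical finding: when external controls systematically fare worse than trial controls, the raw external-control comparison overstates the treatment benefit, and subtracting the estimated bias pulls the estimate back.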
16.
In this paper we consider the problem of unbiased estimation of the distribution function of an exponential population using order statistics based on a random sample. We present a (unique) unbiased estimator based on a single, say ith, order statistic and study some properties of the estimator for i = 2. We also indicate how this estimator can be utilized to obtain unbiased estimators when a few selected order statistics are available, as well as when the sample is selected following an alternative sampling procedure known as ranked set sampling. It is further proved that for a ranked set sample of size two, the proposed estimator is uniformly better than the conventional nonparametric unbiased estimator; furthermore, for a general sample size, a modified ranked set sampling procedure provides an unbiased estimator uniformly better than the conventional nonparametric unbiased estimator based on the usual ranked set sampling procedure.
17.
Scott R. Walter Bruce M. Brown William T.M. Dunsmuir 《Australian & New Zealand Journal of Statistics》2020,62(2):133-152
Clinical work is characterised by frequent interjection of external prompts causing clinicians to switch from a primary task to deal with an incoming secondary task, a phenomenon associated with negative effects in experimental studies. This is an important yet underexplored aspect of work in safety critical settings in general, since an increase in task length due to task‐switching implies reduced efficiency, while decreased length suggests hastening to compensate for the increased workload brought by the unexpected secondary tasks, which is a potential safety issue. In such observational settings, longer tasks are naturally more likely to have one or more task‐switching events: a form of length bias. To assess the effect of task‐switching on task completion time, it is necessary to estimate counterfactual task lengths had they not experienced any task‐switching, while also accounting for length bias. This is a problem that appears simple at first, but has several counterintuitive considerations resulting in a uniquely constrained solution space. We review the only existing method based on an assumption that task‐switches occur according to a homogeneous Poisson process. We propose significant extensions to flexibly incorporate heterogeneity that is more representative of task‐switching in real‐world contexts. The techniques are applied to observations of emergency physicians’ workflow in two hospital settings.
18.
Survival studies often collect information about covariates. If these covariates are believed to contain information about the life-times, they may be considered when estimating the underlying life-time distribution. We propose a non-parametric estimator which uses the recorded information about the covariates. Various forms of incomplete data, e.g. right-censored data, are allowed. The estimator is the conditional mean of the true empirical survival function given the observed history, and it is derived using a general filtering formula. Feng & Kurtz (1994) showed that the estimator is the Kaplan–Meier estimator in the case of right-censoring when using the observed life-times and censoring-times as the observed history. We take the same approach as Feng & Kurtz (1994) but in addition we incorporate the recorded information about the covariates in the observed history. Two models are considered and in both cases the Kaplan–Meier estimator is a special case of the estimator. In a simulation study the estimator is compared with the Kaplan–Meier estimator in small samples.
19.
This article addresses the problem of estimating the population mean in stratified random sampling using the information of an auxiliary variable. A class of estimators for the population mean is defined, with its properties derived under a large sample approximation. In particular, various classes of estimators are identified as particular members of the suggested class. It is shown that the proposed class of estimators is better than the usual unbiased estimator, the usual combined ratio estimator, the usual product estimator, the usual regression estimator and the Koyuncu and Kadilar (2009) class of estimators. The results are illustrated through an empirical study.
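For reference, the classical combined ratio and regression benchmarks named in the abstract can be sketched as follows; the population is simulated, and the pooled regression slope is a simplification of the textbook combined regression estimator:

```python
import numpy as np

# Hedged sketch: combined ratio and regression estimators of a population
# mean in stratified random sampling, with the auxiliary population mean
# X_bar assumed known.
rng = np.random.default_rng(4)

pop = []
for mu, Nh in [(10, 5000), (25, 3000), (50, 2000)]:
    xh = rng.gamma(shape=4.0, scale=mu / 4.0, size=Nh)
    yh = 3.0 * xh + rng.normal(scale=4.0, size=Nh)
    pop.append((xh, yh))

N = sum(len(x) for x, _ in pop)
X_bar = sum(x.sum() for x, _ in pop) / N     # known auxiliary mean
Y_bar = sum(y.sum() for _, y in pop) / N     # target (unknown in practice)

y_st = x_st = 0.0
xs, ys = [], []
for xh, yh in pop:
    Wh = len(xh) / N
    idx = rng.choice(len(xh), 60, replace=False)
    y_st += Wh * yh[idx].mean()              # stratified sample means
    x_st += Wh * xh[idx].mean()
    xs.append(xh[idx]); ys.append(yh[idx])

ratio_est = y_st * (X_bar / x_st)            # combined ratio estimator
xs, ys = np.concatenate(xs), np.concatenate(ys)
b = np.cov(xs, ys, ddof=1)[0, 1] / xs.var(ddof=1)   # pooled slope (simplified)
reg_est = y_st + b * (X_bar - x_st)          # combined regression estimator
print(ratio_est, reg_est, Y_bar)
```

Both estimators exploit the known X_bar to correct the stratified mean y_st for sampling fluctuations in x, which is the standard of comparison against which the abstract's class of estimators is evaluated.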
20.
This article considers the problem of estimating the population mean using information on two auxiliary variables in the presence of nonresponse under two-phase sampling. Some improved ratio-in-regression type estimators are proposed for four different situations of nonresponse, along with their properties under a large sample approximation. Efficiency comparisons of the proposed estimators are made with the usual unbiased estimator of Hansen and Hurwitz (1946), the conventional ratio and regression estimators using a single auxiliary variable, and the Singh and Kumar (2010b) estimators using two auxiliary variables. Finally, these theoretical findings are illustrated by a numerical example.