Similar literature
Found 20 similar documents (search time: 31 ms)
1.
Non-parametric group sequential designs in randomized clinical trials (cited 1 time: 0 self-citations, 1 by others)
This paper examines non-parametric group sequential designs for randomized clinical trials, either for comparing two continuous treatment effects with observations taken in matched pairs or for event-based analyses. Two inverse binomial sampling schemes are considered, the second of which is an adaptive, data-dependent design. These designs are compared with fixed-sample-size competitors, and power and expected sample sizes are calculated for the proposed procedures.
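The inverse binomial sampling idea can be sketched as follows: matched pairs are enrolled until a pre-set number of pairs favour one treatment, so the sample size, rather than the success count, is random. A minimal illustration (the function name and the 'A'/'B' coding are ours, not the paper's):

```python
def inverse_binomial_sample_size(pair_preferences, r):
    """Return the number of matched pairs inspected until the r-th
    preference for treatment A is observed (None if never reached).

    pair_preferences: iterable of 'A' or 'B', one per informative pair.
    """
    successes = 0
    for n, pref in enumerate(pair_preferences, start=1):
        if pref == 'A':
            successes += 1
            if successes == r:
                return n
    return None

# Stop after the 3rd preference for A: here that happens at the 5th pair.
print(inverse_binomial_sample_size(['A', 'B', 'A', 'B', 'A', 'A'], 3))  # 5
```

Under such a scheme the expected sample size depends on the underlying preference probability, which is what the paper's comparisons with fixed-sample-size designs quantify.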

2.
Item non-response in surveys occurs when some, but not all, variables are missing. Unadjusted estimators tend to exhibit some bias, called the non-response bias, if the respondents differ from the non-respondents with respect to the study variables. In this paper, we focus on item non-response, which is usually treated by some form of single imputation. We examine the properties of doubly robust imputation procedures, which are those that lead to an estimator that remains consistent if either the outcome variable or the non-response mechanism is adequately modelled. We establish the double robustness property of the imputed estimator of the finite population distribution function under random hot-deck imputation within classes. We also discuss the links between our approach and that of Chambers and Dunstan. The results of a simulation study support our findings.
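Random hot-deck imputation within classes, as studied here, replaces each missing value with a value drawn at random from the observed respondents in the same imputation class. A minimal stdlib sketch (the function name and the fixed seed are our own):

```python
import random

def hot_deck_impute(values, classes, seed=0):
    """Random hot-deck imputation within classes: each missing value (None)
    is replaced by a value drawn at random from the observed respondents
    (donors) in the same imputation class."""
    rng = random.Random(seed)
    donors = {}
    for v, c in zip(values, classes):
        if v is not None:
            donors.setdefault(c, []).append(v)
    return [v if v is not None else rng.choice(donors[c])
            for v, c in zip(values, classes)]

y = [10, None, 12, 30, None, 34]
g = ['a', 'a', 'a', 'b', 'b', 'b']
print(hot_deck_impute(y, g))
```

Because every imputed value is an actually observed value from the same class, the procedure preserves the within-class distribution, which is why it is well suited to estimating a distribution function.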

3.
The short-term and long-term hazard ratio model includes the proportional hazards model and the proportional odds model as submodels, and allows a wider range of hazard ratio patterns compared with some of the more traditional models. We propose two omnibus tests for checking this model, based, respectively, on the martingale residuals and the contrast between the non-parametric and model-based estimators of the survival function. These tests are shown to be consistent against any departure from the model. The empirical behaviours of the tests are studied in simulations, and the tests are illustrated with some real data examples.

4.
The case-cohort design has been demonstrated to be an economical and efficient approach in large cohort studies when measuring some covariates on all individuals is expensive. Various methods have been proposed for case-cohort data when the dimension of the covariates is smaller than the sample size. However, limited work has been done for high-dimensional case-cohort data, which are frequently collected in large epidemiological studies. In this paper, we propose a variable screening method for ultrahigh-dimensional case-cohort data under the proportional hazards model framework, which allows the covariate dimension to increase with the sample size at an exponential rate. Our procedure enjoys the sure screening property and ranking consistency under mild regularity conditions. We further extend this method to an iterative version to handle scenarios in which some covariates are jointly important but are marginally uncorrelated with, or only weakly correlated with, the response. The finite sample performance of the proposed procedure is evaluated via both simulation studies and an application to real data from a breast cancer study.
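The screening step can be illustrated generically: rank the covariates by a marginal utility and keep only the top few. The sketch below uses absolute Pearson correlation as the marginal measure, as a stand-in for the survival-specific utility the paper actually uses:

```python
def marginal_screen(X, y, keep):
    """Rank covariates by |marginal Pearson correlation| with the response
    and return the indices of the top `keep` covariates (largest first)."""
    n = len(y)
    my = sum(y) / n
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx = sum(col) / n
        sx = sum((v - mx) ** 2 for v in col) ** 0.5
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        scores.append(abs(cov / (sx * sy)) if sx > 0 and sy > 0 else 0.0)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:keep]

# y depends on column 0 only; screening ranks that column first.
X = [[1, 5], [2, 3], [3, 8], [4, 1]]
y = [2, 4, 6, 8]
print(marginal_screen(X, y, 1))  # [0]
```

The iterative extension mentioned in the abstract would re-run such a step on residuals after conditioning on already-selected covariates, so that jointly important but marginally weak covariates can still be recruited.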

5.
This paper studies the asymptotic behaviour of the false discovery and non-discovery proportions of the dynamic adaptive procedure under some dependence structure. A Bahadur-type representation of the cut point when simultaneously performing a large number of tests is presented. Asymptotic bias decompositions of the false discovery and non-discovery proportions are given under some dependence structure. In contrast to the existing literature, we find that the randomness due to the dynamic selection of the tuning parameter in estimating the true null rate serves as a source of approximation error in the Bahadur representation and enters the asymptotic bias terms of both the false discovery proportion and the false non-discovery proportion. The theory explains to some extent why some seemingly attractive dynamic adaptive procedures do not substantially outperform competing fixed adaptive procedures in some situations. Simulations support our theory and findings.

6.
This paper introduces continuous-time random processes whose spectral density is unbounded at some non-zero frequencies. The discretized versions of these processes have asymptotic properties similar to those of discrete-time Gegenbauer processes. The paper presents some properties of the covariance function and spectral density as well as a theory of statistical estimation of the mean and covariance function of such processes. Some directions for further generalizations of the results are indicated.

7.
This paper deals with a longitudinal semi-parametric regression model in a generalised linear model setup for repeated count data collected from a large number of independent individuals. To accommodate the longitudinal correlations, we consider a dynamic model for repeated counts in which the auto-correlations decay as the time lag between the repeated responses increases. The semi-parametric regression function involved in the model contains a specified regression function in some suitable time-dependent covariates and a non-parametric function in some other time-dependent covariates. As far as inference is concerned, because the non-parametric function is of secondary interest, we estimate it consistently using the well-known quasi-likelihood approach based on a working independence assumption. Next, the proposed longitudinal correlation structure and the estimate of the non-parametric function are used to develop a semi-parametric generalised quasi-likelihood approach for consistent and efficient estimation of the regression effects in the parametric regression function. The finite sample performance of the proposed estimation approach is examined through an intensive simulation study based on both large and small samples, incorporating both balanced and unbalanced cluster sizes. The asymptotic properties of the estimators are also given. The estimation methodology is illustrated by reanalysing the well-known health care utilisation data, consisting of counts of yearly visits to a physician by 180 individuals over four years, together with several important primary and secondary covariates.

8.
Wilcoxon-type rank-sum precedence tests (cited 1 time: 0 self-citations, 1 by others)
This paper introduces Wilcoxon-type rank-sum precedence tests for testing the hypothesis that two life-time distribution functions are equal. They extend the precedence life-test first proposed by Nelson in 1963. The paper proposes three Wilcoxon-type rank-sum precedence test statistics (the minimal, maximal and expected rank-sum statistics) and derives their null distributions. Critical values are presented for some combinations of sample sizes, and the exact power function is derived under the Lehmann alternative. The paper examines the power properties of the Wilcoxon-type rank-sum precedence tests under a location-shift alternative through Monte Carlo simulations, and compares the power of the precedence test, the maximal precedence test, and the Wilcoxon rank-sum test (based on complete samples). Two examples are presented for illustration.
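The complete-sample Wilcoxon rank-sum statistic that these precedence tests build on can be sketched as follows (the precedence versions, which work with censored life-tests, bound this statistic rather than compute it exactly):

```python
def rank_sum(x, y):
    """Wilcoxon rank-sum statistic: the sum of the ranks of sample x in
    the pooled ordering of both samples (average ranks for ties)."""
    pooled = sorted(x + y)

    def avg_rank(v):
        lo = pooled.index(v) + 1          # first rank occupied by v
        hi = lo + pooled.count(v) - 1     # last rank occupied by v
        return (lo + hi) / 2

    return sum(avg_rank(v) for v in x)

# If all of x precedes all of y, x gets the smallest possible rank sum.
print(rank_sum([1, 2, 3], [4, 5, 6]))  # 6.0
```

The minimal and maximal statistics in the paper correspond to the extreme rank sums compatible with the observed precedence data when only early failures have been seen.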

9.
Copulas are powerful explanatory tools for studying dependence patterns in multivariate data. While the primary use of copula models is in multivariate dependence modelling, they also offer predictive value for regression analysis. This article investigates the utility of copula models for model-based predictions from two angles. We assess whether, where, and by how much various copula models differ in their predictions of a conditional mean and conditional quantiles. From a model selection perspective, we then evaluate the predictive discrepancy between copula models using in-sample and out-of-sample predictions both in bivariate and higher-dimensional settings. Our findings suggest that some copula models are more difficult to distinguish in terms of their overall predictive power than others, and depending on the quantity of interest, the differences in predictions can be detected only in some targeted regions. The situations where copula-based regression approaches would be advantageous over traditional ones are discussed using simulated and real data. The Canadian Journal of Statistics 47: 8–26; 2019 © 2018 Statistical Society of Canada
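As one concrete example of copula-based prediction, conditional quantiles under a Gaussian copula have a closed form: with correlation ρ and uniform margins, the p-th conditional quantile of U₂ given U₁ = u₁ is Φ(ρΦ⁻¹(u₁) + √(1−ρ²)Φ⁻¹(p)). A stdlib sketch (the function name is ours):

```python
from statistics import NormalDist

def gaussian_copula_cond_quantile(u1, p, rho):
    """p-th conditional quantile of U2 given U1 = u1 under a Gaussian
    copula with correlation rho (both margins uniform on (0, 1))."""
    nd = NormalDist()
    z1 = nd.inv_cdf(u1)
    return nd.cdf(rho * z1 + (1 - rho ** 2) ** 0.5 * nd.inv_cdf(p))

# With rho = 0 the conditional quantile reduces to p itself.
print(round(gaussian_copula_cond_quantile(0.8, 0.5, 0.0), 6))  # 0.5
```

Comparing such conditional quantiles across copula families (Gaussian, Clayton, Gumbel, ...) at targeted values of u₁ is exactly where, per the abstract, predictive differences tend to show up.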

10.
We present some lower bounds for the probability of zero for the class of count distributions having a log-convex probability generating function, which includes compound and mixed-Poisson distributions. These lower bounds allow the construction of new non-parametric estimators of the number of unobserved zeros, which are useful for capture-recapture models, or in areas like epidemiology and literary style analysis. Some of these bounds also lead to the well-known Chao's and Turing's estimators. Several examples of application are analysed and discussed.
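Chao's estimator mentioned above bounds the number of unobserved classes from below using the singleton count f₁ and doubleton count f₂, as f₁²/(2f₂). A sketch (the f₂ = 0 fallback shown is one common convention, not necessarily the paper's):

```python
def chao_lower_bound(f1, f2):
    """Chao's lower bound on the number of unobserved (zero-count)
    classes, from the singleton count f1 and doubleton count f2."""
    return f1 ** 2 / (2 * f2) if f2 > 0 else f1 * (f1 - 1) / 2

# 10 species seen once and 5 seen twice: at least 10 more species unseen.
print(chao_lower_bound(10, 5))  # 10.0
```

In capture-recapture terms, adding this bound to the number of observed classes gives a lower bound on the total population of classes.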

11.
Motivated by the need to analyze the National Longitudinal Surveys data, we propose a new semiparametric longitudinal mean-covariance model in which the effects of some explanatory variables on the dependent variable are linear while those of others are non-linear, and the within-subject correlations are modelled by a non-stationary autoregressive error structure. We develop an estimation procedure based on the least squares technique, approximating the non-parametric functions via B-spline expansions, and establish the asymptotic normality of the parametric estimators as well as the rate of convergence of the non-parametric estimators. We further advocate a new model selection strategy in the varying-coefficient model framework for distinguishing whether a component is significant and, if so, whether it is linear or non-linear. The proposed method can also be employed to identify the true order of the lagged terms consistently. Monte Carlo studies are conducted to examine the finite sample performance of our approach, and an application to real data is also presented.

12.
There is now general agreement that pre-testing for carry-over in the AB/BA design is harmful and that efficient analysis of this design must proceed on the assumption that carry-over has not affected the results to any appreciable degree. A general consensus has not been achieved in the case of higher-order designs. Since particular forms of carry-over can be estimated on a within-patient basis and unbiased within-patient treatment estimators are possible, some statisticians favour pre-testing and some favour automatic adjustment for carry-over. We present theoretical arguments showing that, just as in the AB/BA case, the strategy of pre-testing is biased as a whole, and also that the loss of efficiency from adjusting is not negligible. We also present data from two large series of bioequivalence studies to provide empirical evidence that in this context carry-over is either absent or rare. We conclude that adjusting or testing for carry-over in bioequivalence studies is at worst harmful and at best pointless, and that this may also apply to other kinds of study. Copyright © 2004 John Wiley & Sons, Ltd.

13.
Estimating the effect of medical treatments on subject responses is one of the crucial problems in medical research. Matched-pairs designs are commonly implemented in medical research to eliminate confounding and improve efficiency. In this article, new estimators of treatment effects for heterogeneous matched-pairs data are proposed, and their asymptotic properties are derived. Simulation studies show that the proposed estimators have advantages over Heckman's well-known estimator, the conditional maximum likelihood estimator, and the inverse probability weighted estimator. We apply the proposed methodology to a data set from a study of low-birth-weight infants.
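For homogeneous matched-pairs data, the most basic treatment-effect estimator is the average within-pair difference; the estimators proposed in this article refine that idea for heterogeneous pairs. A baseline sketch for contrast (not the authors' estimator):

```python
def paired_effect(treated, control):
    """Average within-pair difference: a basic treatment-effect
    estimator for matched-pairs data (one treated and one control
    response per pair)."""
    return sum(t - c for t, c in zip(treated, control)) / len(treated)

# Three matched pairs with differences 1, 2 and 0.
print(paired_effect([5, 7, 6], [4, 5, 6]))  # 1.0
```

Differencing within pairs removes any confounder shared by both members of a pair, which is the design's route to eliminating confounding.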

14.
The classical chi-square test of goodness of fit tests the hypothesis that data arise from some parametric family of distributions against the nonparametric alternative that they arise from some other distribution. However, the chi-square test requires continuous data to be grouped into arbitrary categories, and as the test is based upon an approximation, it can only be used if there are sufficient data. In practice, these requirements are often wasteful of information and overly restrictive. The authors explore the use of the fractional Bayes factor to obtain a Bayesian alternative to the chi-square test when no specific prior information is available. They consider the extent to which their methodology can handle small data sets and continuous data without arbitrary grouping.
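The classical statistic being replaced here is Pearson's chi-square computed over grouped counts, Σ(O−E)²/E. A stdlib sketch:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square goodness-of-fit statistic over grouped counts:
    sum of (observed - expected)^2 / expected across categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 60 rolls of a die grouped into 6 categories, expecting 10 per face.
obs = [8, 9, 12, 11, 10, 10]
print(round(chi_square_stat(obs, [10] * 6), 2))  # 1.0
```

The two drawbacks the abstract cites are visible in this form: continuous data must first be binned into the categories, and the reference chi-square distribution for the statistic is only an approximation that needs adequate expected counts per cell.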

15.
Many epidemiological studies have been conducted to identify associations between nutrient consumption and chronic disease risk. For this problem, Cox regression with additive covariate measurement error has been well developed in the literature. However, researchers are concerned about the validity of the additive measurement error assumption for self-report nutrient data, which are often subject to a variety of serious biases; complications arise primarily because the magnitude of the measurement errors is often associated with characteristics of the study subject, and a more general measurement error model has therefore been developed for self-report data. Recently, study designs using more reliable biomarker data, for which the additive measurement error assumption is more likely to hold, have been considered; biomarker data are often available only in a subcohort. In this paper, a non-parametric maximum likelihood (NPML) estimator using an EM algorithm is proposed to adjust simultaneously for these general measurement errors.

16.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas), or the determination of subgroups with specific properties in a population, should be based on reliable estimates. Information, however, is often collected at a different scale from that of these subgroups; hence, estimates can only be obtained from data at a finer scale. Parametric mixed models are commonly used in small-area estimation. The relationship between predictors and response, however, may not be linear in some real situations. Recently, small-area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P-spline) regression model for the fixed part of the model has been proposed to analyse cross-sectional responses, both normal and non-normal. However, there are many situations in which the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma in different areas of Manitoba, Canada. In cases where covariates that can possibly predict physician visits by asthma patients (e.g. age and genetic and environmental factors) may not have a linear relationship with the response, new models for analysing such data sets are required. In the current work, using both time-series and cross-sectional data methods, we propose P-spline regression models for small-area estimation under GLMMs. Our proposed model covers both normal and non-normal responses. In particular, the empirical best predictors of small-area parameters and their corresponding prediction intervals are studied, with maximum likelihood estimation being used to estimate the model parameters. The performance of the proposed approach is evaluated using simulations and by analysing two real data sets (precipitation and asthma).

17.
In a two-level factorial experiment, the authors consider designs with partial duplication which permit estimation of the constant term, all main effects and some specified two-factor interactions, assuming that the other effects are negligible. They construct parallel-flats designs with two identical parallel flats that meet prior specifications; they also consider classes of 3-flat and 4-flat designs. They show that the designs obtained can have a very simple covariance structure and high D-efficiency. They give an algorithm from which they generate a series of practical designs with run sizes 12, 16, 24, and 32.

18.
The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non-negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non-negative data. We explore exponential smoothing state space models for non-negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non-Gaussian. We propose a new model with properties similar to those of exponential smoothing but without these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short-term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one-step-ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed.
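The simplest member of the exponential smoothing family updates a single level by the recursion ℓ_t = αy_t + (1−α)ℓ_{t−1} and forecasts with the last level. A sketch of that recursion (the paper's state space models generalize it with explicit innovation processes):

```python
def ses_forecast(series, alpha):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    geometrically weighted average of past observations, with smoothing
    parameter alpha in (0, 1)."""
    level = series[0]                      # initialize at the first value
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

print(ses_forecast([10, 12, 11, 13], 0.5))  # 12.0
```

The distributional questions the paper studies (infinite prediction variance, almost-sure convergence to zero) concern what happens when this deterministic recursion is embedded in a stochastic state space model and iterated many steps ahead.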

19.
In this paper, we provide a definition of pattern of outliers in contingency tables within a model-based framework. In particular, we make use of log-linear models and exact goodness-of-fit tests to specify the notions of outlier and pattern of outliers. The language and some techniques from Algebraic Statistics are essential tools to make the definition clear and easily applicable. We also analyse several numerical examples to show how to use our definitions.

20.
Negative binomial (NB) regression is the most common full-likelihood method for analysing count data exhibiting overdispersion relative to the Poisson distribution. Most practitioners are content to fit one of two NB variants; however, other important variants exist. It is demonstrated here that the VGAM R package can fit them all under a common statistical framework founded upon a generalised linear and additive model approach. Additionally, other modifications, such as zero-altered (hurdle), zero-truncated and zero-inflated NB distributions, are naturally handled. Rootograms are also available for graphically checking goodness of fit. Two data sets and some recently added features of the VGAM package are used for illustration.
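The most common NB variant (often labelled NB2) has the mean-variance relation Var(Y) = μ + μ²/k, which is what makes it suitable for overdispersed counts. A quick method-of-moments reading of the dispersion k from sample moments can be sketched as follows (VGAM itself fits k by maximum likelihood; the function name is ours):

```python
def nb2_moment_dispersion(mean, var):
    """Method-of-moments estimate of the NB2 dispersion k from the
    relation var = mean + mean**2 / k (requires var > mean, i.e.
    overdispersion relative to the Poisson)."""
    return mean ** 2 / (var - mean)

# Overdispersed counts with mean 4 and variance 12 give k = 2.
print(nb2_moment_dispersion(4, 12))  # 2.0
```

Smaller k means stronger overdispersion; as k → ∞ the NB2 variance collapses to the Poisson's Var(Y) = μ.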


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号