Similar Articles
20 similar records found.
1.
We consider the estimation of a two-dimensional continuous–discrete density function. A new methodology based on wavelets is proposed. We construct a linear wavelet estimator and a non-linear wavelet estimator based on term-by-term thresholding. Their rates of convergence are established under the mean integrated squared error over Besov balls. In particular, we prove that our adaptive wavelet estimator attains a fast rate of convergence. A simulation study illustrates the usefulness of the proposed estimators.

2.
Classical time-series theory assumes values of the response variable to be ‘crisp’ or ‘precise’, an assumption that is quite often violated in reality. However, forecasting of such data can be carried out through fuzzy time-series analysis. This article presents an improved method of forecasting based on LR fuzzy sets as membership functions. As an illustration, the methodology is employed for forecasting India's total foodgrain production. For the data under consideration, the superiority of the proposed method over competing methods is demonstrated, in terms of both modelling and forecasting, on the basis of mean square error and average relative error criteria. Finally, out-of-sample forecasts are also obtained.

3.
We develop a sample size methodology that achieves specified Type-1 and Type-2 error rates when comparing the survivor functions of multiple treatment groups versus a control group. The designs control the family-wise Type-1 error rate. We assume the family of Weibull distributions adequately describes the underlying survivor functions, and we separately consider three of the most common study scenarios: (a) complete samples; (b) Type-1 censoring with a common censoring time; and (c) Type-1 censoring with an accrual period. A mouse longevity study comparing the effect on survival of multiple low-calorie diets is used to motivate our work on this problem.

4.
Change point monitoring for distributional changes in time-series models is an important issue. In this article, we propose two monitoring procedures to detect distributional changes of squared residuals in GARCH models. The asymptotic properties of our monitoring statistics are derived both under the null of no change in distribution and under the alternative of a change in distribution. The finite-sample properties are investigated through a simulation study.
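As a rough illustration of the idea (not the monitoring statistics derived in the article), the sketch below compares the empirical distribution of squared standardized residuals from a change-free training period with those arriving during monitoring, using a Kolmogorov–Smirnov-type detector. The GARCH fit itself is assumed to have been done elsewhere, so standardized residuals are taken as input, and the alarm boundary is an illustrative constant rather than a derived critical value.

```python
import numpy as np

def monitor_squared_residuals(train_sq, stream_sq, boundary=2.5):
    """Sequential KS-type detector for a change in the distribution of squared
    (standardized) residuals.  `train_sq` comes from a change-free historical
    period; `stream_sq` arrives one observation at a time during monitoring.
    The constant boundary is illustrative; a real procedure would calibrate the
    boundary to the desired asymptotic false-alarm rate."""
    train_sq = np.sort(np.asarray(train_sq, dtype=float))
    m = len(train_sq)

    def ecdf_train(x):                      # ECDF of the training residuals
        return np.searchsorted(train_sq, x, side="right") / m

    seen, stat = [], 0.0
    for t, x in enumerate(np.asarray(stream_sq, dtype=float), start=1):
        seen.append(x)
        grid = np.sort(seen)
        ecdf_mon = np.arange(1, t + 1) / t  # monitoring-period ECDF at its jumps
        stat = np.sqrt(t * m / (t + m)) * np.max(np.abs(ecdf_mon - ecdf_train(grid)))
        if stat > boundary:
            return t, stat                  # alarm time and detector value
    return None, stat                       # no alarm during monitoring

# toy example: the residual distribution changes after 50 monitoring steps
rng = np.random.default_rng(1)
train = rng.standard_normal(500) ** 2
stream = np.r_[rng.standard_normal(50) ** 2, (2.0 * rng.standard_normal(150)) ** 2]
print(monitor_squared_residuals(train, stream))
```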

5.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
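A minimal sketch of blinded continuous information monitoring for a negative binomial endpoint is given below: the pooled (treatment-blinded) event rate and shape parameter are re-estimated as patients accrue, the Fisher information for the log rate ratio is approximated assuming 1:1 allocation, and recruitment stops once a target information level is reached. The information approximation, the method-of-moments shape estimate, and all numerical settings are illustrative assumptions, not the criteria derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def target_information(theta, alpha=0.05, power=0.8):
    """Fixed-design information needed to detect a log rate ratio `theta`."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / theta ** 2

def estimate_shape(counts):
    """Crude blinded method-of-moments estimate of the NB shape parameter
    (Var = mu + mu^2 / shape); large values mean little overdispersion."""
    mu = counts.mean()
    extra = max(counts.var(ddof=1) - mu, 1e-8)
    return mu ** 2 / extra

def blinded_information(counts, follow_up, shape):
    """Approximate information for the log rate ratio from pooled (blinded)
    counts and follow-up times, assuming 1:1 allocation."""
    rate = counts.sum() / follow_up.sum()            # blinded pooled event rate
    w = rate * follow_up / (1.0 + rate * follow_up / shape)
    per_arm = 0.5 * w.sum()                          # each arm holds ~half the subjects
    return 1.0 / (1.0 / per_arm + 1.0 / per_arm)

# blinded continuous monitoring: stop once the information reaches the target
rng = np.random.default_rng(7)
theta = np.log(0.7)                                  # planning effect: rate ratio 0.7
I_target = target_information(theta)
counts, follow_up = [], []
while True:
    t = rng.uniform(0.5, 2.0)                        # follow-up of a newly reviewed patient
    lam = 1.2                                        # blinded: the patient's arm is unknown
    counts.append(rng.negative_binomial(n=1, p=1.0 / (1.0 + lam * t)))
    follow_up.append(t)
    if len(counts) > 20:
        c, f = np.array(counts), np.array(follow_up)
        if blinded_information(c, f, estimate_shape(c)) >= I_target:
            break
print("recruitment stopped after", len(counts), "patients")
```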

6.
Forecasting the turning points in business cycles is important to economic and political decisions. Time series of business indicators often exhibit cycles that cannot easily be modelled with a parametric function. This article presents a method for monitoring time series with cycles in order to detect the turning points. A non-parametric estimation procedure that uses only monotonicity restrictions is used. The methodology of statistical surveillance is used to develop a system for early warnings of cycle turning points in monthly data. In monitoring, the inference situation is one of repeated decisions. Measures of the performance of a method of surveillance include, for example, the average run length and the expected delay to a correct alarm. The properties of the proposed monitoring system are evaluated by means of a simulation study. The false alarms are controlled by fixing the median run length to the first false alarm. Results are given on the median delay time to a correct alarm for two situations: a peak after two years and after three years, respectively.
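A sketch of a monotonicity-based alarm statistic is given below: at each monitoring time the best monotone (still-rising) fit is compared with the best rise-then-fall fit over all candidate peak positions, and a large reduction in squared error for the peaked fit signals a turning point. The statistic, the alarm limit, and the use of scikit-learn's IsotonicRegression are illustrative choices, not the surveillance system evaluated in the article.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def turning_point_statistic(y):
    """Compare the best monotone-increasing fit with the best
    increase-then-decrease fit (peak at some interior index).  Large values
    of the log SSE ratio suggest a peak has already occurred."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    up = IsotonicRegression(increasing=True).fit_transform(t, y)
    sse_up = np.sum((y - up) ** 2)

    best_sse_peak = np.inf
    for j in range(2, len(y) - 1):                  # candidate peak positions
        left = IsotonicRegression(increasing=True).fit_transform(t[:j], y[:j])
        right = IsotonicRegression(increasing=False).fit_transform(t[j:], y[j:])
        sse = np.sum((y[:j] - left) ** 2) + np.sum((y[j:] - right) ** 2)
        best_sse_peak = min(best_sse_peak, sse)
    return np.log((sse_up + 1e-12) / (best_sse_peak + 1e-12))

def monitor(series, alarm_limit=0.5, warmup=12):
    """Sequential surveillance: alarm at the first time the statistic exceeds
    `alarm_limit` (in practice the limit would be tuned by simulation, e.g.
    to fix the median run length to the first false alarm)."""
    for s in range(warmup, len(series) + 1):
        if turning_point_statistic(series[:s]) > alarm_limit:
            return s
    return None

# toy monthly indicator: rises for 30 months, then turns downward
rng = np.random.default_rng(0)
signal = np.r_[np.linspace(0.0, 3.0, 30), np.linspace(3.0, 1.5, 18)]
print("alarm at month:", monitor(signal + 0.15 * rng.standard_normal(len(signal))))
```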

7.
"The limitations of available migration data preclude a time-series approach of modeling interstate migration [in the United States]. The method presented here combines aspects of the demographic and economic approaches to forecasting migration in a manner compatible with existing data. Migration rates are modeled to change in response to changes in economic conditions. When applied to resently constructed data on migration based on income tax returns and then compared to standard demographic projections, the demographic-economic approach has a 20% lower total error in forecasting net migration by state for cohorts of labor-force age."  相似文献   

8.
In this article, we propose a class of partial deconvolution kernel estimators for the nonparametric regression function when some covariates are measured with error and some are not. The estimation procedure combines the classical kernel methodology and the deconvolution kernel technique. According to whether the measurement error is ordinarily smooth or supersmooth, we establish the optimal local and global convergence rates for these proposed estimators, and the optimal bandwidths are also identified. Furthermore, lower bounds for the convergence rates of all possible estimators of the nonparametric regression function are developed. It is shown that, in both the supersmooth and ordinarily smooth cases, the convergence rates of the proposed partial deconvolution kernel estimators attain the lower bound. The Canadian Journal of Statistics 48: 535–560; 2020 © 2020 Statistical Society of Canada
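For a concrete (if simplified) picture, the sketch below implements a partial deconvolution Nadaraya–Watson estimator under the assumption of Laplace (ordinarily smooth) measurement error and a Gaussian kernel, for which the deconvolution kernel has the closed form K_U(u) = K(u) − (b/h)² K''(u). The bandwidths are illustrative; the article identifies the optimal choices and rates.

```python
import numpy as np

def gauss(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def deconv_kernel_laplace(u, h, b):
    """Deconvolution kernel for a Gaussian kernel and Laplace(0, b)
    measurement error: K_U(u) = K(u) - (b/h)^2 * K''(u)."""
    return gauss(u) * (1.0 - (b / h) ** 2 * (u ** 2 - 1.0))

def partial_deconv_nw(x0, z0, W, Z, Y, hx, hz, b):
    """Partial deconvolution Nadaraya-Watson estimate of m(x0, z0):
    deconvolution kernel in the error-prone direction (W = X + U) and an
    ordinary Gaussian kernel in the error-free direction Z."""
    kx = deconv_kernel_laplace((x0 - W) / hx, hx, b)
    kz = gauss((z0 - Z) / hz)
    w = kx * kz
    return np.sum(w * Y) / np.sum(w)

# toy data: m(x, z) = sin(x) + z^2, Laplace error on the first covariate
rng = np.random.default_rng(3)
n, b = 2000, 0.3
X, Z = rng.uniform(-2, 2, n), rng.uniform(-1, 1, n)
W = X + rng.laplace(0.0, b, n)                     # contaminated covariate
Y = np.sin(X) + Z ** 2 + 0.2 * rng.standard_normal(n)

# illustrative bandwidths (the article identifies the optimal choices)
print("estimate:", partial_deconv_nw(1.0, 0.5, W, Z, Y, hx=0.4, hz=0.3, b=b))
print("target  :", np.sin(1.0) + 0.25)
```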

9.
Many recent multiple testing papers have provided more efficient and/or robust methodology for control of a particular error rate. However, different multiple testing scenarios call for the control of different error rates. Hence, the procedure possessing the desired optimality and/or robustness properties may not be applicable to the problem at hand. This paper provides a general method for extending any multiple testing procedure to control any error rate, thereby allowing for the procedure possessing the desired properties to be used to control the most relevant error rate. As an example, two popular procedures that were originally designed to control the marginal and positive False Discovery Rate are extended to control the False Discovery Rate and Family-wise Error Rate. It is shown that optimality and/or robustness properties of the original procedure are retained when it is modified using the proposed method.

10.
Stochastic curtailment has been considered for the interim monitoring of group sequential trials (Davis and Hardy, 1994). The statistical boundaries in Davis and Hardy (1994) were derived using the theory of Brownian motion. In some clinical trials, the conditions for forming a Brownian motion may not be satisfied. In this paper, we extend the computations of Brownian motion based boundaries, expected stopping times, and type I and type II error rates to fractional Brownian motion (FBM). FBM includes Brownian motion as a special case. Designs under FBM are compared to those under Brownian motion and to those of O’Brien–Fleming type tests. One- and two-sided boundaries for efficacy and futility monitoring are also discussed. Results show that boundary values decrease and error rates deviate from design levels as the Hurst parameter increases from 0.1 to 0.9; these changes should be considered when designing a study under FBM.
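A minimal sketch of the FBM computations is given below: paths of fractional Brownian motion are simulated at the interim information times from the Cholesky factor of the covariance function 0.5(s^{2H} + t^{2H} − |s − t|^{2H}), and the type I error rate of a fixed set of boundaries is estimated for several Hurst parameters (H = 0.5 recovers Brownian motion). The boundary values used are illustrative O'Brien–Fleming-type numbers, not the boundaries derived in the paper.

```python
import numpy as np

def fbm_paths(times, hurst, n_paths, rng):
    """Simulate fractional Brownian motion at the interim analysis times via
    the Cholesky factor of its covariance 0.5(s^{2H} + t^{2H} - |s-t|^{2H})."""
    t = np.asarray(times, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))
    return rng.standard_normal((n_paths, len(t))) @ L.T

def crossing_probability(boundary, times, hurst, n_paths=200_000, seed=0):
    """Probability that the standardized statistic B_H(t) / t^H exceeds the
    boundary at any interim look (the type I error rate under the null)."""
    rng = np.random.default_rng(seed)
    B = fbm_paths(times, hurst, n_paths, rng)
    Z = B / np.asarray(times, dtype=float) ** hurst   # unit variance at each look
    return np.mean((Z > boundary).any(axis=1))

looks = [0.25, 0.5, 0.75, 1.0]                        # information fractions
boundary = np.array([4.05, 2.86, 2.34, 2.02])         # illustrative one-sided boundary
for H in (0.1, 0.5, 0.9):                             # H = 0.5 is ordinary Brownian motion
    print("H =", H, " type I error:", round(crossing_probability(boundary, looks, H), 4))
```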

11.
Biomarkers play a key role in the monitoring of disease progression. The time taken for an individual's biomarker to rise above or fall below a meaningful threshold is often of interest. Due to the inherent variability of biomarkers, persistence criteria are sometimes included in the definitions of progression, such that only two consecutive measurements above or below the relevant threshold signal that “true” progression has occurred. In previous work, a novel approach was developed that allowed estimation of the time to threshold using the parameters from a linear mixed model in which the residual variance was assumed to be pure measurement error. In this paper, we extend this methodology so that serial correlation can be accommodated. Assuming that the Markov property holds and applying the chain rule of probabilities, we find that the probability of progression at each timepoint can be expressed simply as the product of conditional probabilities. The methodology is applied to a cohort of HIV-positive individuals, where the time to reach a CD4 count threshold is estimated. The second application we present is based on a study of abdominal aortic aneurysms, where the time taken for an individual to reach a diameter exceeding 55 mm is studied. We observe that erroneously ignoring the residual correlation when it is strong may result in substantial overestimation of the time to threshold. The estimated probability of the biomarker reaching a threshold of interest, the expected time to threshold, and confidence intervals are presented for selected patients in both applications.
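As a simulation-based stand-in for the closed-form chain-rule computation described above, the sketch below draws trajectories from an assumed linear mixed model prediction with AR(1) serial correlation in the residuals and records the first visit at which two consecutive measurements fall below the threshold (the persistence criterion). All parameter values are illustrative.

```python
import numpy as np

def time_to_threshold(intercept, slope, times, resid_sd, rho, threshold,
                      n_sims=100_000, seed=0):
    """Monte Carlo distribution of the first visit at which two consecutive
    measurements fall below `threshold`.  The mean trajectory is
    intercept + slope * t; residuals follow an AR(1) process with lag-one
    correlation `rho` and marginal SD `resid_sd` (rho = 0 reduces to the
    pure-measurement-error case of the earlier work)."""
    rng = np.random.default_rng(seed)
    times = np.asarray(times, dtype=float)
    mean = intercept + slope * times
    n_t = len(times)

    eps = np.empty((n_sims, n_t))
    eps[:, 0] = resid_sd * rng.standard_normal(n_sims)
    innov_sd = resid_sd * np.sqrt(1.0 - rho ** 2)
    for j in range(1, n_t):
        eps[:, j] = rho * eps[:, j - 1] + innov_sd * rng.standard_normal(n_sims)

    below = (mean + eps) < threshold
    confirmed = below[:, 1:] & below[:, :-1]            # two consecutive visits below
    first = np.where(confirmed.any(axis=1),
                     confirmed.argmax(axis=1) + 1, -1)  # index of the confirming visit
    return times, first

# illustrative CD4-like trajectory declining towards a threshold of 350 cells/uL
times, first = time_to_threshold(intercept=600, slope=-60,
                                 times=np.arange(0, 6.5, 0.5),
                                 resid_sd=80, rho=0.6, threshold=350)
reached = first >= 0
print("P(confirmed progression within follow-up):", reached.mean())
print("median time to confirmed progression:", np.median(times[first[reached]]))
```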

12.
Gottman's version of the Mann and Wald asymptotic test for intervention effects in time-series data is presented as a useful small-sample procedure. A Monte Carlo simulation is conducted to evaluate how well the procedure controls Type I errors for varying values of the autoregressive coefficient. Results indicate that the procedure works better than Gottman's work originally indicated. However, in some cases error rates can be unacceptably high. Procedures for evaluating changes in level in the presence of autocorrelation and slope are suggested and evaluated.
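The sketch below illustrates the type of Monte Carlo evaluation described: series with no intervention effect are generated under AR(1) errors, a naive level-shift test that ignores the autocorrelation is applied, and the empirical Type I error rate is recorded for several autoregressive coefficients. The naive OLS t-test is an illustrative stand-in, not Gottman's statistic.

```python
import numpy as np
from scipy import stats

def level_shift_pvalue(y, n_pre):
    """Naive OLS t-test for a level shift at the intervention point,
    ignoring any autocorrelation in the errors."""
    n = len(y)
    X = np.column_stack([np.ones(n), (np.arange(n) >= n_pre).astype(float)])
    beta, res_ss = np.linalg.lstsq(X, y, rcond=None)[:2]
    sigma2 = res_ss[0] / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return 2 * stats.t.sf(abs(beta[1] / se), df=n - 2)

def type1_error(phi, n_pre=20, n_post=20, n_sims=5000, seed=0):
    """Empirical rejection rate at alpha = 0.05 when there is no intervention
    effect and the errors follow an AR(1) process with coefficient phi."""
    rng = np.random.default_rng(seed)
    n = n_pre + n_post
    rejections = 0
    for _ in range(n_sims):
        e = rng.standard_normal(n)
        y = np.empty(n)
        y[0] = e[0] / np.sqrt(1.0 - phi ** 2)           # stationary start
        for t in range(1, n):
            y[t] = phi * y[t - 1] + e[t]
        rejections += level_shift_pvalue(y, n_pre) < 0.05
    return rejections / n_sims

for phi in (0.0, 0.3, 0.6):                             # increasing autocorrelation
    print("phi =", phi, " empirical Type I error:", type1_error(phi))
```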

13.
Many authors have shown that a combined analysis of data from two or more types of recapture survey brings advantages, such as the ability to provide more information about parameters of interest. For example, a combined analysis of annual resighting and monthly radio-telemetry data allows separate estimates of true survival and emigration rates, whereas only apparent survival can be estimated from the resighting data alone. For studies involving more than one type of survey, biologists should consider how to allocate the total budget to the surveys related to the different types of marks so that they gain optimal information from the surveys. For example, since radio tags and subsequent monitoring are very costly, while leg bands are cheap, biologists should try to balance costs with the information obtained in deciding how many animals should receive radios. Given a total budget and specific costs, it is possible to determine the allocation of sample sizes to the different types of marks that minimizes the variance of parameters of interest, such as annual survival and emigration rates. In this paper, we propose a cost function for a study where all birds receive leg bands, a subset receives radio tags, and all new releases occur at the start of the study. Using this cost function, we obtain the allocation of sample sizes to the two survey types that minimizes the standard error of survival rate estimates or, alternatively, the standard error of emigration rates. Given the proposed costs, we show that for a high resighting probability, e.g., 0.6, tagging roughly 10–40% of birds with radios will give survival estimates with standard errors within the minimum range. Lower resighting rates will require a higher percentage of radioed birds. In addition, the proposed costs require tagging the maximum possible percentage of radioed birds to minimize the standard error of emigration estimates.

14.
Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate (FDR) traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach: we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate (pFDR) and the FDR, and provide evidence for its benefits. It is shown that the pFDR is probably the quantity of interest over the FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini–Hochberg FDR method.
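A minimal sketch of the fixed-rejection-region idea is given below: the null proportion pi0 is estimated from the upper tail of the p-value distribution, the error rate of each rejection region [0, t] is estimated, and the q-value of each p-value is the smallest estimated error rate over regions containing it. The tuning constant lambda and the simulated example are illustrative.

```python
import numpy as np
from scipy.stats import norm

def qvalues(pvals, lam=0.5):
    """q-values from a vector of p-values: pi0 is estimated from the
    proportion of p-values above `lam` (roughly uniform under the null), and
    the q-value of p_(i) is the smallest estimated error rate over all fixed
    rejection regions [0, t] with t >= p_(i)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))     # conservative pi0 estimate

    order = np.argsort(p)
    p_sorted = p[order]
    # estimated error rate when rejecting every p-value <= p_(i)
    fdr = pi0 * m * p_sorted / np.arange(1, m + 1)
    # enforce monotonicity: q(p_(i)) = min over j >= i of fdr_j
    q_sorted = np.minimum.accumulate(fdr[::-1])[::-1]

    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

# toy example: 900 null z-scores and 100 from a shifted alternative
rng = np.random.default_rng(42)
z = np.r_[rng.standard_normal(900), rng.standard_normal(100) + 3.0]
p = norm.sf(z)                                         # one-sided p-values
q = qvalues(p)
print("discoveries at q <= 0.05:", int(np.sum(q <= 0.05)))
```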

15.
In modern financial economics, continuous-time models provide a convenient description of the dynamics of key economic variables such as stock prices, exchange rates, and interest rates. We propose a two-stage, high-frequency-data-driven estimation method for continuous-time models, which increases the flexibility and practicality of extended continuous-time specifications. The Vasicek model is used to illustrate the application of the method: in the first stage, the diffusion parameter is estimated by the realized volatility approach; in the second stage, the drift parameters are estimated from the forward equation for the stationary distribution of the observed data. The method depends only weakly on the initial model specification and the optimization algorithm, and the results are comparatively stable and reliable.
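A minimal sketch of the two-stage idea for the Vasicek model dr = kappa(theta − r)dt + sigma dW is shown below: the diffusion parameter is recovered from the realized (quadratic) variation of high-frequency observations, and the drift parameters are then matched to the stationary law N(theta, sigma²/(2 kappa)). Simple moment matching is used here as a stand-in for the forward-equation step; all numerical settings are illustrative.

```python
import numpy as np

def simulate_vasicek(kappa, theta, sigma, r0, dt, n, seed=0):
    """Euler discretization of the Vasicek model (for illustration only)."""
    rng = np.random.default_rng(seed)
    dW = np.sqrt(dt) * rng.standard_normal(n)
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        r[i + 1] = r[i] + kappa * (theta - r[i]) * dt + sigma * dW[i]
    return r

def two_stage_vasicek(r, dt):
    """Stage 1: sigma^2 from the realized (quadratic) variation divided by the
    time span.  Stage 2: drift parameters matched to the stationary law
    N(theta, sigma^2 / (2 kappa))."""
    increments = np.diff(r)
    total_time = len(increments) * dt
    sigma2_hat = np.sum(increments ** 2) / total_time   # realized-volatility estimate
    theta_hat = r.mean()                                # stationary mean
    kappa_hat = sigma2_hat / (2.0 * r.var())            # stationary variance = sigma^2/(2 kappa)
    return kappa_hat, theta_hat, np.sqrt(sigma2_hat)

# ten years of 5-minute observations (78 intraday steps, 252 trading days/year)
dt = 1.0 / (252 * 78)
r = simulate_vasicek(kappa=1.5, theta=0.05, sigma=0.02, r0=0.03,
                     dt=dt, n=10 * 252 * 78)
print("kappa, theta, sigma:", two_stage_vasicek(r, dt))
```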

16.
The empirical best linear unbiased prediction approach is a popular method for the estimation of small area parameters. However, the estimation of a reliable mean squared prediction error (MSPE) of the estimated best linear unbiased predictors (EBLUP) is a complicated process. In this paper we study the use of resampling methods for MSPE estimation of the EBLUP. A cross-sectional and time-series stationary small area model is used to provide estimates in small areas. Under this model, a parametric bootstrap procedure and a weighted jackknife method are introduced. A Monte Carlo simulation study is conducted in order to compare the performance of different resampling-based measures of uncertainty of the EBLUP with the analytical approximation. Our empirical results show that the proposed resampling-based approaches perform better than the analytical approximation in several situations, although in some cases they tend to underestimate the true MSPE of the EBLUP in a larger number of small areas.

17.
We propose a simple two-stage monitoring rule for detecting small disorders in a two-sample location problem. The proposed rule is based on ranks and hence is nonparametric in nature. In the first stage, we use a sequential monitoring scheme to decide whether a location test needs to be employed at some point in time. If so, we simply use a two-sample Wilcoxon rank sum test in the second stage. This leads to a semi-sequential, one-shot monitoring procedure. We study the asymptotic performance of the proposed rule and present numerical findings obtained through Monte Carlo studies. The proposed rule meets the challenge of controlling the type I error rate in sequential monitoring of an incoming series of observations.
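A sketch of the two-stage idea is given below: a sequential rank-based (CUSUM-type) detector over incoming observations, referenced against an initial sample, decides whether a location test is needed, and if so a standard two-sample Wilcoxon rank sum (Mann–Whitney) test is carried out once. The placement statistic and the trigger threshold are illustrative, not the rule analysed in the article.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def sequential_rank_monitor(reference, stream, threshold=3.0):
    """Stage 1: for each incoming observation compute its centred placement
    among the reference sample and accumulate a one-sided CUSUM.  Returns the
    time at which the second stage is triggered, or None."""
    ref = np.sort(np.asarray(reference, dtype=float))
    m = len(ref)
    cusum = 0.0
    for t, x in enumerate(np.asarray(stream, dtype=float), start=1):
        placement = np.searchsorted(ref, x) / m - 0.5   # ~0 on average before a shift
        cusum = max(0.0, cusum + placement)
        if cusum > threshold:
            return t
    return None

rng = np.random.default_rng(5)
reference = rng.standard_normal(200)
stream = np.r_[rng.standard_normal(60), rng.standard_normal(60) + 0.8]  # late upward shift

trigger = sequential_rank_monitor(reference, stream)
if trigger is not None:
    # Stage 2: one-shot two-sample Wilcoxon rank sum (Mann-Whitney) test
    res = mannwhitneyu(stream[:trigger], reference, alternative="greater")
    print("triggered at observation", trigger, "Wilcoxon p-value:", res.pvalue)
else:
    print("no location test required")
```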

18.
Leave-one-out and the .632 bootstrap are popular data-based methods of estimating the true error rate of a classification rule, but practical applications almost exclusively quote only point estimates. Interval estimation would provide a better assessment of the future performance of the rule, but little has been published on this topic. We first review general-purpose jackknife and bootstrap methodology that can be used in conjunction with leave-one-out estimates to provide prediction intervals for true error rates of classification rules. Monte Carlo simulation is then used to investigate coverage rates of the resulting intervals for normal data, but the results are disappointing; standard intervals show considerable overinclusion, intervals based on Edgeworth approximations or random weighting do not perform well, and while a bootstrap approach provides intervals with coverage rates closer to the nominal ones there is still marked underinclusion. We then turn to intervals constructed from .632 bootstrap estimates, and show that much better results are obtained. Although there is now some overinclusion, particularly for large training samples, the actual coverage rates are sufficiently close to the nominal rates for the method to be recommended. An application to real data illustrates the considerable variability that can arise in practical estimation of error rates.
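The sketch below computes the apparent, leave-one-out, and .632 bootstrap point estimates of the error rate for a linear discriminant rule on simulated two-class normal data; the interval constructions compared in the article would be built around estimates of this kind. The classifier, data, and number of bootstrap replicates are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

def error_rates(X, y, n_boot=200, seed=0):
    """Apparent, leave-one-out and .632 bootstrap estimates of the true error
    rate of a linear discriminant classification rule."""
    rng = np.random.default_rng(seed)
    n = len(y)

    apparent = np.mean(LDA().fit(X, y).predict(X) != y)

    # leave-one-out error
    loo_errors = 0
    for i in range(n):
        keep = np.arange(n) != i
        fit = LDA().fit(X[keep], y[keep])
        loo_errors += fit.predict(X[i:i + 1])[0] != y[i]
    loo = loo_errors / n

    # bootstrap out-of-bag error (epsilon_0)
    errs, counts = np.zeros(n), np.zeros(n)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        if len(oob) == 0 or len(np.unique(y[idx])) < 2:
            continue
        fit = LDA().fit(X[idx], y[idx])
        errs[oob] += fit.predict(X[oob]) != y[oob]
        counts[oob] += 1
    eps0 = np.mean(errs[counts > 0] / counts[counts > 0])

    b632 = 0.368 * apparent + 0.632 * eps0              # the .632 estimator
    return apparent, loo, b632

# two-class normal training data with moderate separation
rng = np.random.default_rng(1)
X = np.r_[rng.standard_normal((60, 2)), rng.standard_normal((60, 2)) + 1.2]
y = np.r_[np.zeros(60), np.ones(60)]
print("apparent, leave-one-out, .632:", error_rates(X, y))
```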

19.
In general, growth models are fitted under the assumptions that the error terms are homoscedastic and normally distributed. However, these assumptions often do not hold in practice. In this work we propose four growth models (Morgan–Mercer–Flodin, von Bertalanffy, Gompertz, and Richards), considering different distributions (normal, skew-normal) for the error terms and three different covariance structures. A maximum likelihood estimation procedure is developed. A simulation study is performed in order to verify the appropriateness of the proposed growth curve models. The methodology is also illustrated on a real dataset.
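As a small illustration of the baseline case, the sketch below fits a Gompertz curve by nonlinear least squares, which coincides with maximum likelihood under homoscedastic normal errors; the skew-normal error distributions and alternative covariance structures proposed in the article would replace this objective with the corresponding likelihood. Data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, alpha, beta, k):
    """Gompertz growth curve: alpha * exp(-beta * exp(-k * t))."""
    return alpha * np.exp(-beta * np.exp(-k * t))

# simulated growth data with homoscedastic normal errors (illustrative)
rng = np.random.default_rng(2)
t = np.linspace(0, 20, 40)
y = gompertz(t, alpha=100, beta=4, k=0.3) + 3.0 * rng.standard_normal(len(t))

# nonlinear least squares == maximum likelihood under i.i.d. normal errors
params, cov = curve_fit(gompertz, t, y, p0=(90, 3, 0.2))
print("estimates (alpha, beta, k):", params)
print("approximate standard errors:", np.sqrt(np.diag(cov)))
```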

20.
Flexible designs offer a large amount of flexibility in clinical trials while controlling the type I error rate. This allows the combination of trials from different clinical phases of a drug development process. Such combinations require designs in which hypotheses are selected and/or added at an interim analysis without knowing the selection rule in advance, so that both flexibility and multiplicity issues arise. The paper reviews the basic principles and some of the common methods for achieving flexibility while controlling the family-wise error rate in the strong sense. Flexible designs have been criticized because they may lead to different weights for the patients from the different stages when sample sizes are reassessed. Analyzing the data in a conventional way avoids such unequal weighting but may inflate the multiple type I error rate. In cases where the conditional type I error rates of the new design (and conventional analysis) are below the conditional type I error rates of the initial design, the conventional analysis may, however, be done without inflating the type I error rate. Focusing on a parallel group design with two treatments and a common control, we use this principle to investigate when we can select one treatment, reassess sample sizes, and test the corresponding null hypotheses with the conventional level-alpha z-test without compromising the multiple type I error rate.
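A minimal sketch of the conditional type I error principle for a single one-sided z-test is given below: given the first-stage data, the conditional probability that the conventional fixed-sample test rejects is computed for the initial design and for a modified (enlarged) design, and the modification is admissible without inflating the type I error when the new conditional error does not exceed the initial one. The single-hypothesis setting and the numbers are illustrative; the paper works with treatment selection and the multiple (family-wise) error rate.

```python
import numpy as np
from scipy.stats import norm

def conditional_error(z1, n1, n_total, alpha=0.025):
    """Conditional type I error of the conventional fixed-sample one-sided
    z-test at level alpha, given the first-stage z-statistic z1 computed from
    n1 of the n_total planned observations (per group)."""
    n2 = n_total - n1
    z_alpha = norm.ppf(1 - alpha)
    # final statistic: (sqrt(n1) * Z1 + sqrt(n2) * Z2) / sqrt(n_total)
    cutoff = (z_alpha * np.sqrt(n_total) - z1 * np.sqrt(n1)) / np.sqrt(n2)
    return norm.sf(cutoff)

# promising interim result after half of the planned information
ce_initial = conditional_error(z1=2.2, n1=50, n_total=100)

# conventional analysis of an enlarged trial: admissible here only if its
# conditional error does not exceed that of the initial design
ce_new = conditional_error(z1=2.2, n1=50, n_total=160)
print(round(ce_initial, 4), round(ce_new, 4), "admissible:", ce_new <= ce_initial)
```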
