Similar Articles
20 similar articles found (search time: 699 ms)
1.
A simulation experiment compares the accuracy and precision of three alternate estimation techniques for the parameters of the STARMA model. Maximum likelihood estimation, in most ways the "best" estimation procedure, involves a large amount of computational effort, so two approximate techniques, exact least squares and conditional maximum likelihood, are often proposed for series of moderate length. This simulation experiment compares the accuracy of these three estimation procedures for simulated series of various lengths, and discusses the appropriateness of the three procedures as a function of the length of the observed series.
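The accuracy-versus-length trade-off can be illustrated on the simplest special case, a univariate AR(1). The sketch below (hypothetical helper names; conditional least squares standing in for the approximate techniques, not the paper's STARMA code) shows bias and spread shrinking as the series grows.

```python
import random
import statistics

def simulate_ar1(phi, n, rng, burn=100):
    """Simulate a zero-mean AR(1) series y_t = phi * y_{t-1} + e_t."""
    y, out = 0.0, []
    for t in range(n + burn):
        y = phi * y + rng.gauss(0.0, 1.0)
        if t >= burn:
            out.append(y)
    return out

def conditional_ls(y):
    """Conditional least-squares estimate of phi: conditions on the first
    observation, trading a little accuracy for a closed form."""
    num = sum(a * b for a, b in zip(y[1:], y[:-1]))
    den = sum(b * b for b in y[:-1])
    return num / den

rng = random.Random(2)
phi = 0.6
results = {}
for n in (30, 200):
    ests = [conditional_ls(simulate_ar1(phi, n, rng)) for _ in range(500)]
    results[n] = {"bias": statistics.mean(ests) - phi,
                  "sd": statistics.stdev(ests)}
```

Both the bias and the sampling spread of the approximate estimator shrink markedly between n = 30 and n = 200, mirroring the paper's point that the cheap procedures become adequate for longer series.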

2.
The randomized response (RR) technique pioneered by Warner, S.L. (1965) [Randomized response: a survey technique for eliminating evasive answer bias. J. Amer. Statist. Assoc. 60, 63–69] is a useful tool in estimating the proportion of persons in a community bearing sensitive or socially disapproved characteristics. Mangat, N.S. & Singh, R. (1990) [An alternative randomized response procedure. Biometrika 77, 439–442] proposed a modification of Warner's procedure by using two RR techniques. Presented here is a generalized two-stage RR procedure and a derivation of the condition under which the proposed procedure produces a more precise estimator of the population parameter. A comparative study of the performance of this two-stage procedure and conventional RR techniques, assuming that the respondents' jeopardy level under the proposed method remains the same as that offered by the traditional RR procedures, is also reported. In addition, a numerical example compares the efficiency of the proposed method with the traditional RR procedures.
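As background, Warner's original single-stage estimator inverts the observed 'yes' rate lam = p*pi + (1-p)*(1-pi). The sketch below simulates that device and applies the estimator; function names are illustrative and truthful reporting is assumed.

```python
import random

def warner_estimate(responses, p):
    """Warner (1965) estimator: the 'yes' probability is
    lam = p*pi + (1-p)*(1-pi), so pi = (lam - (1-p)) / (2p - 1)."""
    n = len(responses)
    lam_hat = sum(responses) / n
    pi_hat = (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)
    # Estimated variance of pi_hat under Warner's model.
    var_hat = lam_hat * (1.0 - lam_hat) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, var_hat

def simulate_warner(pi, p, n, rng):
    """Each respondent answers the sensitive question with probability p
    and its complement with probability 1 - p; truthful reporting assumed."""
    responses = []
    for _ in range(n):
        has_trait = rng.random() < pi
        asked_direct = rng.random() < p
        # 'yes' iff (direct question and trait) or (complement and no trait).
        responses.append(int(has_trait == asked_direct))
    return responses

rng = random.Random(1)
resp = simulate_warner(pi=0.3, p=0.7, n=20000, rng=rng)
pi_hat, var_hat = warner_estimate(resp, p=0.7)
```

Two-stage variants such as the one proposed here modify how the question is randomized, but the inversion logic is of the same form.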

3.
In this paper, we reconsider the mixture vector autoregressive model, which was proposed in the literature for modelling non‐linear time series. We complete and extend the stationarity conditions, derive a matrix formula in closed form for the autocovariance function of the process and prove a result on stable vector autoregressive moving‐average representations of mixture vector autoregressive models. For these results, we apply techniques related to a Markovian representation of vector autoregressive moving‐average processes. Furthermore, we analyse maximum likelihood estimation of model parameters by using the expectation–maximization algorithm and propose a new iterative algorithm for getting the maximum likelihood estimates. Finally, we study the model selection problem and testing procedures. Several examples, simulation experiments and an empirical application based on monthly financial returns illustrate the proposed procedures.

4.
In the absence of quantitative clinical standards to detect serial changes in cardiograms, statistical procedures are proposed as an alternative. These procedures are preceded by a dimension-reducing orthonormal transformation of the original digitized cardiogram into a lower-dimensional feature space. In feature space, multivariate test criteria are given for the detection of changes in covariance matrices or mean vectors of the cardiograms. Flexibility is provided to compare the cardiograms of the same individual pairwise or simultaneously. Some pertinent remarks are also made about controlling the overall level of significance and its impact on the application of these techniques to cardiograms of USAF pilots.

5.
Comparison with a standard is a general multiple comparison problem, where each system is required to be compared with a single system, referred to as a 'standard', as well as with other alternative systems. Screening procedures specially designed to be used for comparison with a standard have been proposed to find a subset that includes all the systems better than the standard in terms of the expected performance. Selection procedures are derived to determine the best system among a number of systems that are better than the standard, or to select the standard when it is equal to or better than the other alternatives. We develop new procedures for screening and selection through the use of two variance reduction techniques, common random numbers and control variates, which are particularly useful in the context of simulation experiments. Empirical results and a realistic example are also provided to compare our procedures with the existing ones.
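A toy illustration of why common random numbers (one of the two techniques used) help: when two systems are driven by the same uniform draws, the shared noise cancels in their estimated difference. The performance model below is hypothetical, not the paper's screening procedure.

```python
import random
import statistics

def system_output(theta, u):
    """Hypothetical simulation output: mean-theta performance driven by a
    single uniform random number u (the same u is shared under CRN)."""
    return theta + 4.0 * (u - 0.5)

def diff_estimates(n, crn, seed=0):
    """Replications of X1 - X2 with or without common random numbers."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        u1 = rng.random()
        u2 = u1 if crn else rng.random()
        diffs.append(system_output(1.0, u1) - system_output(0.8, u2))
    return diffs

d_crn = diff_estimates(4000, crn=True)
d_ind = diff_estimates(4000, crn=False)
var_crn = statistics.variance(d_crn)   # shared noise cancels under CRN
var_ind = statistics.variance(d_ind)
mean_crn = statistics.mean(d_crn)
```

In this extreme toy model the noise cancels exactly, so the variance of the difference collapses; in realistic simulations the cancellation is partial but the same mechanism drives the gain.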

6.
This paper presents a new hybrid algorithm for pricing arithmetic Asian options, combining two variance reduction techniques: multiple control variates (MCV) and antithetic variates (AV). A detailed numerical study illustrates the efficiency of the proposed algorithm.
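A minimal sketch of the antithetic-variates half of such a scheme (the control-variate half, e.g. using the closed-form geometric Asian price as a control, is omitted); parameter values and function names are illustrative.

```python
import math
import random
import statistics

def asian_call_av(s0, k, r, sigma, t, steps, n_pairs, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call under
    geometric Brownian motion, using antithetic variates: each Gaussian
    path z is paired with its mirror -z and the two payoffs averaged."""
    rng = random.Random(seed)
    dt = t / steps
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    pair_means = []
    for _ in range(n_pairs):
        zs = [rng.gauss(0.0, 1.0) for _ in range(steps)]
        pay = []
        for sign in (1.0, -1.0):          # path and its antithetic mirror
            s, total = s0, 0.0
            for z in zs:
                s *= math.exp(drift + vol * sign * z)
                total += s
            pay.append(max(total / steps - k, 0.0))
        pair_means.append(0.5 * (pay[0] + pay[1]))
    disc = math.exp(-r * t)
    price = disc * statistics.mean(pair_means)
    stderr = disc * statistics.stdev(pair_means) / math.sqrt(n_pairs)
    return price, stderr

# At-the-money example: S0 = K = 100, r = 5%, sigma = 20%, monthly averaging.
price, stderr = asian_call_av(100.0, 100.0, 0.05, 0.2, 1.0, 12, 4000)
```

Because the call payoff is monotone in each Gaussian increment, the two payoffs in a pair are negatively correlated and their average has lower variance than two independent paths.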

7.
We consider the problem of estimating the shape parameter of a Pareto distribution with unknown scale under an arbitrary strictly bowl-shaped loss function. Classes of estimators improving upon the minimum risk equivariant estimator are derived by adopting the techniques of Stein, Brown, and Kubokawa. The classes are shown to include some known procedures, such as the Stein-type and Brewster–Zidek-type estimators from the literature. We also provide risk plots of the proposed estimators for illustration purposes.

8.
Recurrent event data are largely characterized by the rate function but smoothing techniques for estimating the rate function have never been rigorously developed or studied in statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. With an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for the bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is identified that the moment method without resmoothing via a smaller bandwidth will produce a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
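A stripped-down version of the moment-type kernel estimator conveys the idea: pool all event times and smooth them with a kernel, dividing by the number of processes. The sketch below assumes no censoring and a fixed bandwidth (the paper selects it by bootstrap), with an Epanechnikov kernel.

```python
import random

def epanechnikov(u):
    """Epanechnikov kernel."""
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def rate_estimate(event_times, n_processes, t, h):
    """Moment-type kernel estimate of the occurrence rate at time t,
    pooling event times from n_processes uncensored recurrent-event
    processes; bandwidth h is fixed here for simplicity."""
    return sum(epanechnikov((t - s) / h)
               for s in event_times) / (n_processes * h)

# Simulate 200 homogeneous Poisson processes with rate 2 on [0, 10].
rng = random.Random(3)
n_proc, true_rate, tau = 200, 2.0, 10.0
events = []
for _ in range(n_proc):
    s = rng.expovariate(true_rate)
    while s < tau:
        events.append(s)
        s += rng.expovariate(true_rate)

lam_hat = rate_estimate(events, n_proc, t=5.0, h=1.0)
```

With censoring, the denominator would be replaced by the number of processes still under observation near t, which is exactly where the "nicks" discussed in the abstract arise for the moment method.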

9.
In this paper the interest is in testing the null hypothesis of positive quadrant dependence (PQD) between two random variables. Such a testing problem is important since prior knowledge of PQD is a qualitative restriction that should be taken into account in further statistical analysis, for example, when choosing an appropriate copula function to model the dependence structure. The key methodology of the proposed testing procedures consists of evaluating a "distance" between a nonparametric estimator of a copula and the independence copula, which serves as a reference case in the whole set of copulas having the PQD property. Choices of appropriate distances and nonparametric estimators of copula are discussed, and the proposed methods are compared with testing procedures based on bootstrap and multiplier techniques. The consistency of the testing procedures is established. In a simulation study the authors investigate the finite sample size and power performances of three types of test statistics, Kolmogorov–Smirnov, Cramér–von‐Mises, and Anderson–Darling statistics, together with several nonparametric estimators of a copula, including recently developed kernel type estimators. Finally, they apply the testing procedures on some real data. The Canadian Journal of Statistics 38: 555–581; 2010 © 2010 Statistical Society of Canada
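The "distance to the independence copula" idea can be sketched directly: rank-transform the sample, evaluate the empirical copula, and measure how far it drops below uv (violations of PQD), Kolmogorov–Smirnov style. This is a simplified illustration, not the paper's calibrated tests.

```python
import math
import random

def empirical_copula(xs, ys):
    """Rank-transform the sample and return its empirical copula."""
    n = len(xs)
    rx = {v: i + 1 for i, v in enumerate(sorted(xs))}
    ry = {v: i + 1 for i, v in enumerate(sorted(ys))}
    u = [rx[x] / n for x in xs]
    v = [ry[y] / n for y in ys]
    def c(a, b):
        return sum(1 for ui, vi in zip(u, v) if ui <= a and vi <= b) / n
    return u, v, c

def pqd_ks_statistic(xs, ys):
    """Kolmogorov-Smirnov-type measure of how far the empirical copula
    falls BELOW the independence copula uv (evidence against PQD)."""
    n = len(xs)
    u, v, c = empirical_copula(xs, ys)
    return math.sqrt(n) * max(max(ui * vi - c(ui, vi), 0.0)
                              for ui, vi in zip(u, v))

rng = random.Random(7)
xs = [rng.gauss(0.0, 1.0) for _ in range(300)]
ys_pos = [x + rng.gauss(0.0, 1.0) for x in xs]       # positively dependent
ys_ind = [rng.gauss(0.0, 1.0) for _ in range(300)]   # independent
t_pos = pqd_ks_statistic(xs, ys_pos)
t_ind = pqd_ks_statistic(xs, ys_ind)
```

Under strong positive dependence the empirical copula lies above uv almost everywhere, so the statistic stays near zero; under independence random fluctuations below uv inflate it.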

10.
Extensions of Duncan's Multiple Range Test to group means with unequal numbers of replications are proposed. These extensions, together with those due to Kramer (1956) and Bancroft (1968), are evaluated against the Least Significant Difference and the Scheffé (1959) test for observed Type I error and correct decision rates by computer simulation. The performance of these procedures under varying degrees of sample-size imbalance and variance heterogeneity, for normal and skewed distributions, is also examined.

11.
The location model is a familiar basis for discriminant analysis of mixtures of categorical and continuous variables. Its usual implementation involves second-order smoothing, using multivariate regression for the continuous variables and log-linear models for the categorical variables. In spite of the smoothing, these procedures still require many parameters to be estimated and this in turn restricts the categorical variables to a small number if implementation is to be feasible. In this paper we propose non-parametric smoothing procedures for both parts of the model. The number of parameters to be estimated is dramatically reduced and the range of applicability thereby greatly increased. The methods are illustrated on several data sets, and the performances are compared with a range of other popular discrimination techniques. The proposed method compares very favourably with all its competitors.  相似文献   

12.
The counting process with the Cox-type intensity function has been commonly used to analyse recurrent event data. This model essentially assumes that the underlying counting process is a time-transformed Poisson process and that the covariates have multiplicative effects on the mean and rate function of the counting process. Recently, Pepe and Cai, and Lawless and co-workers have proposed semiparametric procedures for making inferences about the mean and rate function of the counting process without the Poisson-type assumption. In this paper, we provide a rigorous justification of such robust procedures through modern empirical process theory. Furthermore, we present an approach to constructing simultaneous confidence bands for the mean function and describe a class of graphical and numerical techniques for checking the adequacy of the fitted mean–rate model. The advantages of the robust procedures are demonstrated through simulation studies. An illustration with multiple-infection data taken from a clinical study on chronic granulomatous disease is also provided.  相似文献   

13.
In the context of a competing risks set-up, we discuss different inference procedures for testing equality of two cumulative incidence functions, where the data may be subject to independent right-censoring or left-truncation. To this end, we compare two-sample Kolmogorov–Smirnov- and Cramér–von Mises-type test statistics. Since, in general, their corresponding asymptotic limit distributions depend on unknown quantities, we utilize wild bootstrap resampling as well as approximation techniques to construct adequate test decisions. Here, the latter procedures are motivated from tests for heteroscedastic factorial designs but have not yet been proposed in the survival context. A simulation study shows the performance of all considered tests under various settings and finally a real data example about bloodstream infection during neutropenia is used to illustrate their application.
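In the uncensored two-sample special case, the Kolmogorov–Smirnov ingredient reduces to the familiar ECDF distance; the sketch below calibrates it by permutation rather than the wild bootstrap used in the paper, purely for illustration.

```python
import random

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov distance between empirical cdfs."""
    nx, ny = len(x), len(y)
    return max(abs(sum(v <= p for v in x) / nx - sum(v <= p for v in y) / ny)
               for p in x + y)

def permutation_pvalue(x, y, b=199, seed=0):
    """Permutation calibration of the KS distance (a stand-in for the
    wild-bootstrap calibration discussed in the paper)."""
    rng = random.Random(seed)
    obs = ks_two_sample(x, y)
    pooled = x + y
    count = 0
    for _ in range(b):
        rng.shuffle(pooled)
        if ks_two_sample(pooled[:len(x)], pooled[len(x):]) >= obs:
            count += 1
    return (count + 1) / (b + 1)

rng = random.Random(17)
x = [rng.gauss(0.0, 1.0) for _ in range(60)]
y = [rng.gauss(1.5, 1.0) for _ in range(60)]   # clearly shifted sample
p_diff = permutation_pvalue(x, y)
```

With censoring and competing risks the ECDFs are replaced by estimated cumulative incidence functions, and the exchangeability justifying permutation fails, which is why resampling schemes such as the wild bootstrap are needed.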

14.
This paper introduces a new bivariate exponential distribution, called the Bivariate Affine-Linear Exponential distribution, to model moderately negatively dependent data. The construction and characteristics of the proposed bivariate distribution are presented along with estimation procedures for the model parameters based on maximum likelihood and objective Bayesian analysis. We derive the Jeffreys prior and discuss its frequentist properties based on a simulation study and MCMC sampling techniques. A real data set of mercury concentration in largemouth bass from Florida lakes is used to illustrate the methodology.

15.
Outlier detection is fundamental to statistical modelling. When there are multiple outliers, many traditional approaches are stepwise detection procedures, which can be computationally expensive and ignore the stochastic error in the detection process. Outlier detection can instead be performed via a heteroskedasticity test. In this article, a rapid outlier detection method via a multiple heteroskedasticity test based on penalized likelihood is proposed. The proposed method detects heteroskedasticity for all observations in a single step and estimates the coefficients simultaneously, using a weighted least squares formulation coupled with nonconvex sparsity-inducing penalization; it therefore does not need to construct test statistics or derive their distributions. A new algorithm is proposed for optimizing the penalized likelihood function, and favourable theoretical properties of the approach are obtained. Our simulation studies and real data analysis show that the newly proposed method compares favourably with traditional outlier detection techniques.
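The one-step penalized idea can be sketched with the closely related mean-shift outlier model: give every observation its own shift gamma_i, penalize the shifts with an L1 penalty, and flag observations whose shift survives soft-thresholding. This is a simplified stand-in, not the paper's exact heteroskedasticity-based procedure.

```python
import random

def soft_threshold(z, lam):
    """Soft-thresholding, the proximal operator of the L1 penalty."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def mean_shift_outliers(x, y, lam, iters=50):
    """Toy mean-shift model y_i = a + b*x_i + gamma_i + noise: alternate a
    least-squares fit on (y - gamma) with soft-thresholding of residuals.
    A nonzero gamma_i flags observation i as an outlier."""
    n = len(x)
    gamma = [0.0] * n
    a = b = 0.0
    for _ in range(iters):
        yy = [yi - gi for yi, gi in zip(y, gamma)]
        xbar, ybar = sum(x) / n, sum(yy) / n
        sxx = sum((xi - xbar) ** 2 for xi in x)
        b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, yy)) / sxx
        a = ybar - b * xbar
        gamma = [soft_threshold(yi - a - b * xi, lam) for xi, yi in zip(x, y)]
    flagged = [i for i, g in enumerate(gamma) if g != 0.0]
    return a, b, flagged

rng = random.Random(5)
x = [i / 10 for i in range(100)]
y = [1.0 + 2.0 * xi + rng.gauss(0.0, 0.3) for xi in x]
y[10] += 6.0   # planted outliers
y[70] -= 6.0
a, b, flagged = mean_shift_outliers(x, y, lam=2.0)
```

All observations are screened in each sweep, with no per-observation test statistic or reference distribution, which is the computational appeal of the penalized formulation.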

16.
17.
Information available before unblinding regarding the success of a confirmatory clinical trial is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to a changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing simple adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed owing to our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies owing to their ease of implementation.
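A heavily simplified sketch of the interval idea: draw the nuisance standard deviation from its posterior given blinded interim data (a Jeffreys prior is assumed here, not the paper's sequential MCMC filtering) and propagate each draw through a standard sample-size formula, yielding an interval for the required n rather than a point estimate. All numbers are illustrative.

```python
import math
import random
import statistics

def required_n(sigma, delta, z_alpha=1.959964, z_beta=0.841621):
    """Per-arm n for a two-sample z-test (alpha = 0.05 two-sided, 80% power)."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

rng = random.Random(13)
blinded = [rng.gauss(0.0, 2.0) for _ in range(100)]   # blinded interim data
s2 = statistics.variance(blinded)
df = len(blinded) - 1

draws = []
for _ in range(2000):
    # Posterior draw of sigma under a Jeffreys prior:
    # df * s2 / sigma^2 ~ chi-square(df), sampled as a sum of squared normals.
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
    draws.append(required_n(math.sqrt(df * s2 / chi2), delta=1.0))

draws.sort()
interval = (draws[50], draws[1949])   # central 95% interval for required n
```

A point-estimate plug-in would report a single n from s2 alone; the posterior interval conveys how uncertain that requirement still is at the interim look.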

18.
Longitudinal categorical data arise in a variety of fields and are frequently analyzed by the generalized estimating equation (GEE) method. Prior to making further inferences based on a GEE model, assessment of model fit is crucial. Graphical techniques have long been in widespread use for assessing model adequacy. We develop alternative graphical approaches, utilizing plots of the marginal model-checking condition and local mean deviance, to assess GEE models with the logit link for longitudinal binary responses. The applications of the proposed procedures are illustrated through two longitudinal binary datasets.

19.
20.
Specification tests for the error distribution are proposed in semi-linear models, including the partial linear model and additive models. The tests utilize an integrated distance involving the empirical characteristic function of properly estimated residuals. These residuals are obtained from an initial estimation step involving a combination of penalized least squares and smoothing techniques. A bootstrap version of the tests is utilized in order to study the small sample behavior of the procedures in comparison with more classical approaches. As an example, the tests are applied on some real data sets.
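The integrated ECF distance can be sketched for the simplest case, testing normality of (here, directly simulated) residuals: compare the empirical characteristic function of standardized data with exp(-t^2/2) under a Gaussian weight, via a Riemann sum. Grid and weight choices below are illustrative.

```python
import cmath
import math
import random

def ecf(ts, data):
    """Empirical characteristic function of data at grid points ts."""
    n = len(data)
    return [sum(cmath.exp(1j * t * x) for x in data) / n for t in ts]

def ecf_distance(data, grid_max=3.0, m=60):
    """Riemann-sum version of an integrated weighted L2 distance between
    the ECF of standardized data and the standard normal cf exp(-t^2/2)."""
    n = len(data)
    mu = sum(data) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    z = [(x - mu) / sd for x in data]
    ts = [-grid_max + 2.0 * grid_max * k / m for k in range(m + 1)]
    dt = 2.0 * grid_max / m
    return n * sum(abs(p - math.exp(-t * t / 2.0)) ** 2 * math.exp(-t * t)
                   for p, t in zip(ecf(ts, z), ts)) * dt

rng = random.Random(11)
d_norm = ecf_distance([rng.gauss(0.0, 1.0) for _ in range(400)])
d_skew = ecf_distance([rng.expovariate(1.0) for _ in range(400)])
```

In the semi-linear setting the data entering the statistic are the penalized/smoothing residuals, and the null distribution depends on the estimation step, which is why the paper calibrates the test by bootstrap.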


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号