Similar Documents
20 similar documents found.
1.
In this article, the authors consider a semiparametric additive hazards regression model for right‐censored data that allows some censoring indicators to be missing at random. They develop a class of estimating equations and use an inverse probability weighted approach to estimate the regression parameters. Nonparametric smoothing techniques are employed to estimate the probability of non‐missingness and the conditional probability of an uncensored observation. The asymptotic properties of the resulting estimators are derived. Simulation studies show that the proposed estimators perform well. They motivate and illustrate their methods with data from a brain cancer clinical trial. The Canadian Journal of Statistics 38: 333–351; 2010 © 2010 Statistical Society of Canada
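A hedged sketch of the inverse-probability-weighting idea (schematic only, not necessarily the authors' exact estimating equations): writing ξ_i for the indicator that the censoring status of subject i is observed and π̂(W_i) for the estimated non-missingness probability given observed data W_i, a Lin–Ying-type estimating function for the additive hazards model can be weighted as

\[
U(\beta) \;=\; \sum_{i=1}^{n} \frac{\xi_i}{\hat{\pi}(W_i)}
\int_0^{\tau} \bigl\{ Z_i(t) - \bar{Z}(t) \bigr\}
\bigl\{ \mathrm{d}N_i(t) - Y_i(t)\,\beta^{\top} Z_i(t)\,\mathrm{d}t \bigr\} \;=\; 0,
\]

where N_i counts observed failures, Y_i is the at-risk indicator, and Z̄(t) is the at-risk-weighted covariate average.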

2.
The authors propose to estimate nonlinear small area population parameters by using the empirical Bayes (best) method, based on a nested error model. They focus on poverty indicators as particular nonlinear parameters of interest, but the proposed methodology is applicable to general nonlinear parameters. They use a parametric bootstrap method to estimate the mean squared error of the empirical best estimators. They also study small sample properties of these estimators by model‐based and design‐based simulation studies. Results show large reductions in mean squared error relative to direct area‐specific estimators and other estimators obtained by “simulated” censuses. The authors also apply the proposed method to estimate poverty incidences and poverty gaps in Spanish provinces by gender, with mean squared errors estimated by the aforementioned parametric bootstrap method. For the Spanish data, results show a significant reduction in the coefficient of variation of the proposed empirical best estimators over direct estimators for practically all domains. The Canadian Journal of Statistics 38: 369–385; 2010 © 2010 Statistical Society of Canada
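The parametric bootstrap MSE estimator can be sketched generically; in the sketch below, `simulate_population` and `eb_estimate` are hypothetical placeholders for the model-specific steps and are not functions from the paper:

```python
import numpy as np

def bootstrap_mse(fitted_model, simulate_population, eb_estimate, B, rng):
    """Parametric bootstrap MSE: regenerate the population from the fitted
    nested error model B times, recompute the 'true' nonlinear small-area
    parameter implied by each bootstrap population and its empirical best
    estimate, and average the squared differences."""
    sq_errors = []
    for _ in range(B):
        # hypothetical helper: returns a bootstrap population and the true
        # parameter value (e.g., a poverty indicator) it implies
        pop_star, theta_star = simulate_population(fitted_model, rng)
        theta_hat_star = eb_estimate(pop_star)  # re-estimate from bootstrap data
        sq_errors.append((theta_hat_star - theta_star) ** 2)
    return np.mean(sq_errors, axis=0)
```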

3.
For randomly censored data, the authors propose a general class of semiparametric median residual life models. They incorporate covariates in a generalized linear form while leaving the baseline median residual life function completely unspecified. Despite the non‐identifiability of the survival function for a given median residual life function, a simple and natural procedure is proposed to estimate the regression parameters and the baseline median residual life function. The authors derive the asymptotic properties for the estimators, and demonstrate the numerical performance of the proposed method through simulation studies. The median residual life model can be easily generalized to model other quantiles, and the estimation method can also be applied to the mean residual life model. The Canadian Journal of Statistics 38: 665–679; 2010 © 2010 Statistical Society of Canada
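For reference, the median residual life at time t given covariates Z is the median remaining lifetime among survivors, and one generalized linear specification of the kind described (an illustrative log-link form, not necessarily the authors' exact model) is

\[
m_Z(t) \;=\; \operatorname{median}\bigl(T - t \,\big|\, T > t,\, Z\bigr),
\qquad
\log m_Z(t) \;=\; \log m_0(t) + \beta^{\top} Z,
\]

where m_0(·) is the unspecified baseline median residual life function.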

4.
Motivated by time series of atmospheric concentrations of certain pollutants, the authors develop bent‐cable regression for autocorrelated errors. Bent‐cable regression extends the popular piecewise linear (broken‐stick) model, allowing for a smooth change region of any non‐negative width. Here the authors consider autoregressive noise added to a bent‐cable mean structure, with unknown regression and time series parameters. They develop asymptotic theory for conditional least‐squares estimation in a triangular array framework, wherein each segment of the bent cable contains an increasing number of observations while the autoregressive order remains constant as the sample size grows. They explore the theory in a simulation study, develop implementation details, apply the methodology to the motivating pollutant dataset, and provide a scientific interpretation of the bent‐cable change point that was not discussed previously. The Canadian Journal of Statistics 38: 386–407; 2010 © 2010 Statistical Society of Canada
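A minimal sketch of the bent-cable mean function in its common parameterization (incoming slope, slope change, and a quadratic bend of half-width γ around the change point τ); the autoregressive noise and the conditional least-squares machinery are not shown:

```python
import numpy as np

def bent_cable(t, b0, b1, b2, tau, gamma):
    """Bent-cable mean b0 + b1*t + b2*q(t), where the 'hinge' q is zero
    before the bend, quadratic inside |t - tau| <= gamma, and linear
    (t - tau) after it; gamma = 0 recovers the broken-stick model."""
    t = np.asarray(t, dtype=float)
    if gamma > 0:
        bend = (t - tau + gamma) ** 2 / (4.0 * gamma)
        q = np.where(np.abs(t - tau) <= gamma, bend, np.maximum(t - tau, 0.0))
    else:
        q = np.maximum(t - tau, 0.0)
    return b0 + b1 * t + b2 * q
```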

5.
Prior sensitivity analysis and cross‐validation are important tools in Bayesian statistics. However, due to the computational expense of implementing existing methods, these techniques are rarely used. In this paper, the authors show how it is possible to use sequential Monte Carlo methods to create an efficient and automated algorithm to perform these tasks. They apply the algorithm to the computation of regularization path plots and to assess the sensitivity of the tuning parameter in g‐prior model selection. They then demonstrate the algorithm in a cross‐validation context and use it to select the shrinkage parameter in Bayesian regression. The Canadian Journal of Statistics 38: 47–64; 2010 © 2010 Statistical Society of Canada
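The core reweighting trick can be sketched in one step: particles targeting the posterior under one prior are importance-reweighted toward the posterior under a perturbed prior, and the likelihood cancels in the weight (a schematic of the idea, not the authors' full SMC algorithm):

```python
import numpy as np

def prior_sensitivity_weights(log_prior_new, log_prior_old, particles):
    """Importance weights for moving posterior samples obtained under
    p_old to the posterior under p_new: w_i is proportional to
    p_new(theta_i) / p_old(theta_i), because the likelihood is common
    to both posteriors and cancels in the ratio."""
    logw = log_prior_new(particles) - log_prior_old(particles)
    w = np.exp(logw - logw.max())  # stabilize before normalizing
    return w / w.sum()
```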

6.
In many applications, a finite population contains a large proportion of zero values that make the population distribution severely skewed. An unequal‐probability sampling plan compounds the problem, and as a result the normal approximation to the distribution of various estimators has poor precision. The central‐limit‐theorem‐based confidence intervals for the population mean are hence unsatisfactory. Complex designs also make it hard to pin down useful likelihood functions; hence, a direct likelihood approach is not an option. In this paper, we propose a pseudo‐likelihood approach. The proposed pseudo‐log‐likelihood function is an unbiased estimator of the log‐likelihood function when the entire population is sampled. In simulations, when the inclusion probabilities are related to the unit values, the pseudo‐likelihood intervals are superior to existing methods in terms of coverage probability, the balance of non‐coverage rates on the lower and upper sides, and interval length. An application to a data set from the 2000 Canadian Labour Force Survey also shows that the pseudo‐likelihood method performs more appropriately than other methods. The Canadian Journal of Statistics 38: 582–597; 2010 © 2010 Statistical Society of Canada
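The pseudo-log-likelihood can be sketched as a Horvitz–Thompson-weighted sum of log-density contributions; `log_density` below is a user-supplied parametric model (for zero-heavy populations, e.g., a zero-inflated density) and is an assumption of the sketch, not the paper's specific likelihood:

```python
import numpy as np

def pseudo_loglik(theta, y, incl_prob, log_density):
    """Pseudo-log-likelihood: weight each sampled unit's log-density by
    the inverse of its inclusion probability, so that the weighted sum
    unbiasedly estimates the log-likelihood computed over the entire
    finite population."""
    return np.sum(log_density(y, theta) / incl_prob)
```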

7.
Generally, the semiclosed-form option pricing formula for complex financial models depends on unobservable factors such as stochastic volatility and jump intensity. A popular practice is to use an estimate of these latent factors to compute the option price. However, in many situations this plug-and-play approximation does not yield the appropriate price. This article examines this bias and quantifies its impacts. We decompose the bias into terms related to the bias in the unobservable factors and to the precision of their point estimators. The approximated price is found to be highly biased when only the history of the stock price is used to recover the latent states. This bias is corrected when option prices are added to the sample used to recover the states' best estimate. We also show numerically that such a bias propagates to the calibrated parameters, leading to erroneous values. The Canadian Journal of Statistics 48: 8–35; 2020 © 2019 Statistical Society of Canada
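One way to see the two bias sources (a schematic second-order expansion, not the paper's exact decomposition): for an option price C viewed as a function of a latent state x_t, the plug-in price C(x̂_t) differs from the price that accounts for state uncertainty roughly as

\[
\mathbb{E}\bigl[C(x_t)\mid \mathcal{F}_t\bigr] - C(\hat{x}_t)
\;\approx\;
C'(\hat{x}_t)\bigl(\mathbb{E}[x_t\mid\mathcal{F}_t] - \hat{x}_t\bigr)
\;+\; \tfrac{1}{2}\, C''(\hat{x}_t)\operatorname{Var}(x_t\mid\mathcal{F}_t),
\]

with the first term driven by the bias of the point estimator and the second by its imprecision.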

8.
Testing for stochastic order among K populations is a common and important problem in statistical practice. It arises in the analysis of both planned experiments and observational studies. The authors develop a new nonparametric test for order among K populations that can accommodate any stochastic ordering. The test is based on a maximally selected chi‐bar‐square statistic. The authors find its limiting distribution and use simulations to derive critical values. Three important examples are used to illustrate the applicability of the general method. The authors find that the new tests outperform the existing methods in many practical cases. The Canadian Journal of Statistics 38: 97–115; 2010 © 2009 Statistical Society of Canada
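For context, a chi-bar-square statistic has a null distribution that is a mixture of chi-squared distributions,

\[
\Pr\bigl(\bar{\chi}^2 \ge c\bigr) \;=\; \sum_{j} w_j \,\Pr\bigl(\chi^2_j \ge c\bigr),
\qquad \sum_{j} w_j = 1,
\]

where χ²₀ ≡ 0 and the weights w_j are determined by the geometry of the ordering constraints; the maximal selection over candidate orderings is what requires the simulated critical values.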

9.
Longitudinal surveys have emerged in recent years as an important data collection tool for population studies where the primary interest is to examine population changes over time at the individual level. Longitudinal data are often analyzed through the generalized estimating equations (GEE) approach. The vast majority of the existing literature on the GEE method, however, was developed under non‐survey settings and is inappropriate for data collected through complex sampling designs. In this paper the authors develop a pseudo‐GEE approach for the analysis of survey data. They show that survey weights must, and can, be appropriately accounted for in the GEE method under a joint randomization framework. The consistency of the resulting pseudo‐GEE estimators is established under the proposed framework. Linearization variance estimators are developed for the pseudo‐GEE estimators when the finite population sampling fractions are small or negligible, a scenario that typically holds for large‐scale surveys. Finite sample performances of the proposed estimators are investigated through an extensive simulation study using data from the National Longitudinal Survey of Children and Youth. The results show that the pseudo‐GEE estimators and the linearization variance estimators perform well under several sampling designs and for both continuous and binary responses. The Canadian Journal of Statistics 38: 540–554; 2010 © 2010 Statistical Society of Canada
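Schematically, the pseudo-GEE replaces the usual estimating function by its design-weighted version (the standard form; notation may differ from the paper):

\[
\sum_{i \in s} w_i \, D_i^{\top} V_i^{-1} \bigl\{ y_i - \mu_i(\beta) \bigr\} \;=\; 0,
\]

where s is the sample, w_i the survey weight of subject i, D_i = ∂μ_i/∂β, and V_i a working covariance matrix for the repeated measurements.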

10.
Using survey weights, You & Rao [You and Rao, The Canadian Journal of Statistics 2002; 30, 431–439] proposed a pseudo‐empirical best linear unbiased prediction (pseudo‐EBLUP) estimator of a small area mean under a nested error linear regression model. This estimator borrows strength across areas through a linking model, and makes use of survey weights to ensure design consistency and preserve the benchmarking property, in the sense that the estimators add up to a reliable direct estimator of the mean of a large area covering the small areas. In this article, a second‐order approximation to the mean squared error (MSE) of the pseudo‐EBLUP estimator of a small area mean is derived. Using this approximation, an estimator of MSE that is nearly unbiased is derived; the MSE estimator of You & Rao [You and Rao, The Canadian Journal of Statistics 2002; 30, 431–439] ignored cross‐product terms in the MSE and is hence biased. Empirical results on the performance of the proposed MSE estimator are also presented. The Canadian Journal of Statistics 38: 598–608; 2010 © 2010 Statistical Society of Canada
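For reference, the nested error linear regression model underlying the pseudo-EBLUP is

\[
y_{ij} \;=\; x_{ij}^{\top}\beta + v_i + e_{ij},
\qquad
v_i \stackrel{\text{iid}}{\sim} N(0, \sigma_v^2),
\quad
e_{ij} \stackrel{\text{iid}}{\sim} N(0, \sigma_e^2),
\]

for unit j in small area i, with the area effects v_i independent of the unit errors e_{ij}.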

11.
Ghoudi, Khoudraji & Rivest [The Canadian Journal of Statistics 1998;26:187–197] showed how to test whether the dependence structure of a pair of continuous random variables is characterized by an extreme‐value copula. The test is based on a U‐statistic whose finite‐ and large‐sample variances are determined by the present authors. They propose estimates of this variance which they compare to the jackknife estimate of Ghoudi, Khoudraji & Rivest (1998) through simulations. They study the finite‐sample and asymptotic power of the test under various alternatives. They illustrate their approach using financial and geological data. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
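For context, the null hypothesis concerns the max-stability property that characterizes extreme-value copulas:

\[
C\bigl(u^{t}, v^{t}\bigr) \;=\; C(u, v)^{t}
\qquad \text{for all } t > 0 \text{ and } (u, v) \in [0, 1]^2 .
\]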

12.
We use the two‐state Markov regime‐switching model to explain the behaviour of the WTI crude‐oil spot prices from January 1986 to February 2012. We investigate the use of methods based on the composite likelihood and on the full likelihood. We find that the composite‐likelihood approach can better capture the general structural changes in world oil prices. The two‐state Markov regime‐switching model based on the composite‐likelihood approach closely depicts the cycles of the two postulated states: fall and rise. These two states persist on average for 8 and 15 months respectively, which matches the observed cycles during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe that this information can be useful for financial officers working in related areas. The model based on the full‐likelihood approach was less satisfactory. We attribute its failure to the fact that the two‐state Markov regime‐switching model is too rigid and overly simplistic. In comparison, the composite likelihood requires only that the model correctly specifies the joint distribution of two adjacent price changes. Thus, model violations in other areas do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada
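A minimal sketch of the pairwise composite likelihood for a two-state regime-switching model, assuming Gaussian state-dependent densities and a stationary chain (the paper's exact specification may differ):

```python
import numpy as np
from scipy.stats import norm

def composite_loglik(y, mu, sigma, P):
    """Pairwise composite log-likelihood: sum the log joint densities of
    adjacent pairs (y[t-1], y[t]), marginalizing the hidden two-state
    Markov chain under its stationary distribution. mu and sigma are
    length-2 arrays of state means and standard deviations; P is the
    2x2 transition matrix."""
    pi0 = P[1, 0] / (P[0, 1] + P[1, 0])  # stationary P(state 0)
    stat = np.array([pi0, 1.0 - pi0])
    dens = norm.pdf(np.asarray(y)[:, None], loc=mu, scale=sigma)  # T x 2
    # joint pair density: sum_{i,j} stat_i * P[i,j] * dens[t-1,i] * dens[t,j]
    pair = np.einsum('i,ij,ti,tj->t', stat, P, dens[:-1], dens[1:])
    return np.log(pair).sum()
```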

13.
This paper studies generalized linear mixed models (GLMMs) for the analysis of geographic and temporal variability of disease rates. This class of models adopts spatially correlated random effects and random temporal components. Spatio‐temporal models that use conditional autoregressive smoothing across the spatial dimension and autoregressive smoothing over the temporal dimension are developed. The model also accommodates the interaction between space and time. However, the effects of seasonal factors have not previously been addressed, and in some applications (e.g., health conditions) they may not be negligible. The authors incorporate the seasonal effects of month and possibly year as part of the proposed model and estimate model parameters through generalized estimating equations. The model provides smoothed maps of disease risk and eliminates the instability of estimates in low‐population areas while maintaining geographic resolution. They illustrate the approach using a monthly data set of the number of asthma presentations made by children to Emergency Departments (EDs) in the province of Alberta, Canada, during the period 2001–2004. The Canadian Journal of Statistics 38: 698–715; 2010 © 2010 Statistical Society of Canada
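Schematically, a Poisson spatio-temporal GLMM of this type can be written as follows (an illustrative form under assumed notation; the paper's exact specification may differ):

\[
Y_{it} \sim \operatorname{Poisson}(E_{it}\,\rho_{it}),
\qquad
\log \rho_{it} \;=\; x_{it}^{\top}\beta + s_i + \phi_t + \delta_{it} + \gamma_{m(t)},
\]

where E_{it} is the expected count for area i in month t, s_i a conditional-autoregressive spatial effect, φ_t an autoregressive temporal effect, δ_{it} a space-time interaction, and γ_{m(t)} a seasonal (month) effect.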

14.
In this article the author investigates the application of empirical‐likelihood‐based inference for the parameters of the varying‐coefficient single‐index model (VCSIM). Unlike in standard settings, without bias correction the asymptotic distribution of the empirical likelihood ratio cannot achieve the standard chi‐squared distribution. To this end, a bias‐corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters, which have two advantages over those based on the normal approximation: (1) they do not impose prior constraints on the shape of the regions; and (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracy and the average areas/lengths of confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada
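For reference, the profile empirical likelihood ratio for a parameter θ identified by estimating functions g is

\[
R(\theta) \;=\; \max\Bigl\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\; \sum_{i=1}^{n} p_i = 1,\; \sum_{i=1}^{n} p_i\, g(X_i, \theta) = 0 \Bigr\},
\]

and the bias correction described above is what restores the standard chi-squared limit for −2 log R(θ) in the VCSIM setting.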

15.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys‐rule prior, and a reference prior, and study the posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. In the case of known regression and variance parameters, they derive the Jeffreys prior for the correlation parameters. They prove posterior propriety and show that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys‐rule prior for the correlation parameters, and illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada
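For context, a common parameterization of the Gaussian random field with a nugget is (a sketch under assumed notation)

\[
Y(s) \;=\; x(s)^{\top}\beta + Z(s) + \varepsilon(s),
\qquad
\operatorname{Cov}\{Y(s_i), Y(s_j)\} \;=\; \sigma^2 \bigl\{ \rho_{\vartheta}(s_i - s_j) + \tau^2 \,\mathbf{1}\{i = j\} \bigr\},
\]

where ρ_ϑ is a correlation function with range parameter(s) ϑ and τ² is the nugget-to-sill ratio capturing microscale variation and measurement error.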

16.
The authors derive closed‐form expressions for the full, profile, conditional, and modified profile likelihood functions for a class of random growth parameter models that they develop, as well as for Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional, and modified profile likelihood functions are maximized over a few parameters to yield a complete set of parameter estimates. In developing their random growth parameter models, the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way that gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data, and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

17.
Recent work on point processes includes studying posterior convergence rates for estimating a continuous intensity function. In this article, we study the problem of estimating the intensity function of an inhomogeneous Poisson process with a change‐point using non‐parametric Bayesian methods, and derive convergence rates for estimating the intensity function and the change‐point in the more general case of a piecewise continuous intensity function. A Markov chain Monte Carlo (MCMC) algorithm is proposed to obtain estimates of the intensity function and the change‐point; the approach is illustrated using simulation studies and applications. The Canadian Journal of Statistics 47: 604–618; 2019 © 2019 Statistical Society of Canada
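To make the setting concrete, here is a hedged sketch that simulates an inhomogeneous Poisson process with a piecewise-continuous intensity and a single change-point via thinning; it illustrates the data-generating model only, not the authors' MCMC algorithm:

```python
import numpy as np

def simulate_inhomogeneous_poisson(intensity, T, lam_max, rng):
    """Lewis thinning: propose events at constant rate lam_max on [0, T],
    then keep each proposal t with probability intensity(t) / lam_max
    (requires intensity <= lam_max everywhere on [0, T])."""
    n = rng.poisson(lam_max * T)
    candidates = np.sort(rng.uniform(0.0, T, n))
    keep = rng.uniform(0.0, lam_max, n) < intensity(candidates)
    return candidates[keep]

rng = np.random.default_rng(0)
tau = 5.0  # change-point: the intensity jumps here, continuous on each side
intensity = lambda t: np.where(t < tau, 1.0 + 0.2 * t, 4.0 + 0.1 * t)
events = simulate_inhomogeneous_poisson(intensity, T=10.0, lam_max=6.0, rng=rng)
```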

18.
For binomial data analysis, many methods based on empirical Bayes interpretations have been developed, in which a variance‐stabilizing transformation and a normality assumption are usually required. To achieve the greatest model flexibility, we conduct nonparametric Bayesian inference for binomial data and employ a special nonparametric Bayesian prior—the Bernstein–Dirichlet process (BDP)—in the hierarchical Bayes model for the data. The BDP is a special Dirichlet process (DP) mixture based on beta distributions, and the posterior distribution resulting from it has a smooth density defined on [0, 1]. We examine two Markov chain Monte Carlo procedures for simulating from the resulting posterior distribution, and compare their convergence rates and computational efficiency. In contrast to existing results for posterior consistency based on direct observations, the posterior consistency of the BDP, given indirect binomial data, is established. We study shrinkage effects and the robustness of the BDP‐based posterior estimators in comparison with several other empirical and hierarchical Bayes estimators, and we illustrate through examples that the BDP‐based nonparametric Bayesian estimate is more robust to sampling variation and tends to have a smaller estimation error than those based on the DP prior. In certain settings, the new estimator can also beat Stein's estimator, Efron and Morris's limited‐translation estimator, and many other existing empirical Bayes estimators. The Canadian Journal of Statistics 40: 328–344; 2012 © 2012 Statistical Society of Canada
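For reference, a Bernstein polynomial density on [0, 1] with K components mixes fixed beta kernels,

\[
f(x) \;=\; \sum_{k=1}^{K} w_k \operatorname{Beta}\bigl(x;\, k,\, K - k + 1\bigr),
\qquad x \in [0, 1],
\]

and the BDP arises by deriving the weights w_k (and a prior on K) from a Dirichlet process, which yields the smooth posterior densities noted above.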

19.
We show that in a discrete price and discrete time model for option pricing, specifically that given by the Cox–Ross–Rubinstein model, the arbitrage price of a European call option can depend on parameters other than the volatility (the standard deviation of the log asset price). We provide two theorems to illustrate this phenomenon. Our first theorem considers two securities with the same volatility such that at a specified time n₀, with probability near 1, the two securities are equal. While their call options differ, both discounted securities are martingales. Our second theorem considers two securities with the same volatility such that at times n = 0, ..., N − 1 the securities are equal with probability near 1. While their call options differ, one of the discounted securities is a martingale and the other is a supermartingale.
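For context, a standard CRR European call pricer; this is the textbook binomial model the theorems are set in, not a reproduction of the two-security constructions themselves:

```python
import numpy as np
from scipy.stats import binom

def crr_call(S0, K, r, u, d, N):
    """Cox-Ross-Rubinstein price of a European call: the discounted
    risk-neutral expectation of the terminal payoff, where the
    risk-neutral up-probability q requires d < e^r < u."""
    q = (np.exp(r) - d) / (u - d)
    k = np.arange(N + 1)                  # number of up-moves over N periods
    ST = S0 * u**k * d**(N - k)           # terminal stock prices
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * N) * np.sum(binom.pmf(k, N, q) * payoff)
```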

20.
Stochastic Models, 2013, 29(2): 215–245
In this paper, we study the problem of European Option Pricing in a market with short-selling constraints and transaction costs having a very general form. We consider two types of proportional costs and a strictly positive fixed cost. We study the problem within the framework of the theory of stochastic impulse control. We show that determining the price of a European option involves calculating the value functions of two stochastic impulse control problems. We obtain explicit expressions for the quasi-variational inequalities satisfied by the value functions and derive the solution in the case where the parameters of the price processes are constants and the investor's utility function is linear. We use this result to obtain a price for a call option on the stock and prove that this price is a nontrivial lower bound on the hedging price of the call option in the presence of general transaction costs and short-selling constraints. We then consider the situation where the investor's utility function has a general form and characterize the value function as the pointwise limit of an increasing sequence of solutions to associated optimal stopping problems. We thereby devise a numerical procedure to calculate the option price in this general setting and implement the procedure to calculate the option price for the class of exponential utility functions. Finally, we carry out a qualitative investigation of the option prices for exponential and linear-power utility functions.
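For context, the value function of a stochastic impulse control problem of this kind typically satisfies a quasi-variational inequality of the generic form (a schematic, not the paper's exact expressions)

\[
\max\bigl\{ \mathcal{L}V - rV,\; \mathcal{M}V - V \bigr\} \;=\; 0,
\qquad
\mathcal{M}V(x) \;=\; \sup_{\zeta} \bigl[ V\bigl(\Gamma(x, \zeta)\bigr) - c(x, \zeta) \bigr],
\]

where \(\mathcal{L}\) is the generator of the uncontrolled state process, \(\mathcal{M}\) the intervention operator describing a transaction ζ with post-transaction state Γ(x, ζ) and cost c(x, ζ), and r a discount rate.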
