Similar Articles
20 similar articles found.
1.
ABSTRACT

Kernel estimation is a popular approach to estimating the pair correlation function, a fundamental characteristic of spatial point processes. Least squares cross validation was suggested by Guan [A least-squares cross-validation bandwidth selection approach in pair correlation function estimations. Statist Probab Lett. 2007;77(18):1722–1729] as a data-driven approach to selecting the kernel bandwidth. The method can, however, be computationally demanding for large point pattern data sets. We suggest a modified least squares cross validation approach that is asymptotically equivalent to the one proposed by Guan but is computationally much faster.
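For orientation, a minimal sketch of the generic least-squares cross-validation criterion, written here for a one-dimensional Gaussian kernel density estimate rather than Guan's pair-correlation setting; the data and the bandwidth grid are illustrative:

```python
import numpy as np

def lscv_score(data, h):
    """LSCV criterion for a 1-D Gaussian KDE: integral of the squared
    estimate minus twice the mean leave-one-out density."""
    n = len(data)
    d = data[:, None] - data[None, :]
    # The integral of fhat^2 is a sum of N(0, 2h^2) kernels over all pairs.
    int_f2 = np.exp(-d**2 / (4 * h**2)).sum() / (n**2 * 2 * h * np.sqrt(np.pi))
    # Leave-one-out term: Gaussian kernel matrix with the diagonal removed.
    k = np.exp(-d**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
    loo = (k.sum() - np.trace(k)) / (n * (n - 1))
    return int_f2 - 2 * loo

data = np.random.default_rng(0).normal(size=200)
grid = np.linspace(0.05, 1.0, 40)
h_opt = grid[np.argmin([lscv_score(data, h) for h in grid])]
```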

2.
Kernel smoothing of spatial point data can often be improved using an adaptive, spatially varying bandwidth instead of a fixed bandwidth. However, computation with a varying bandwidth is much more demanding, especially when edge correction and bandwidth selection are involved. This paper proposes several new computational methods for adaptive kernel estimation from spatial point pattern data. A key idea is that a variable-bandwidth kernel estimator for d-dimensional spatial data can be represented as a slice of a fixed-bandwidth kernel estimator in (d+1)-dimensional scale space, enabling fast computation using Fourier transforms. Edge correction factors have a similar representation. Different values of global bandwidth correspond to different slices of the scale space, so that bandwidth selection is greatly accelerated. Potential applications include estimation of multivariate probability density and spatial or spatiotemporal point process intensity, relative risk, and regression functions. The new methods perform well in simulations and in two real applications concerning the spatial epidemiology of primary biliary cirrhosis and the alarm calls of capuchin monkeys.
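A toy one-dimensional illustration of the scale-space idea (no edge correction; the bandwidth ladder and the Abramson-type rule below are my assumptions, not the paper's algorithm):

```python
import numpy as np
from scipy.signal import fftconvolve

# Compute fixed-bandwidth KDEs on a ladder of bandwidths by FFT
# convolution, then read an adaptive estimate off as a slice h(x)
# through the (x, h) scale space.
rng = np.random.default_rng(1)
data = rng.normal(size=500)

x = np.linspace(-4, 4, 513)
dx = x[1] - x[0]
counts, _ = np.histogram(data, bins=np.append(x - dx / 2, x[-1] + dx / 2))
emp = counts / (len(data) * dx)                   # binned empirical density

bandwidths = np.geomspace(0.05, 1.0, 32)          # the scale-space axis
scale_space = np.empty((len(bandwidths), len(x)))
for i, h in enumerate(bandwidths):
    kern = np.exp(-0.5 * ((x - x[len(x) // 2]) / h) ** 2)
    kern /= kern.sum() * dx                       # kernel integrates to 1
    scale_space[i] = fftconvolve(emp, kern, mode="same") * dx

pilot = scale_space[len(bandwidths) // 2]         # pilot density estimate
h_x = 0.2 / np.sqrt(np.maximum(pilot, 1e-3))      # Abramson-type bandwidths
rows = np.abs(np.log(bandwidths)[:, None] - np.log(h_x)[None, :]).argmin(axis=0)
adaptive = scale_space[rows, np.arange(len(x))]   # the adaptive "slice"
```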

3.
ABSTRACT

Clinical trials are usually designed under the implicit assumption that data analysis will occur only after the trial is completed. Evaluating drug efficacy in the middle of the study without breaking the randomization codes is therefore a challenging problem for the sponsor. In this article, the randomized response model and the mixture model are introduced to analyze the data while masking the randomization codes of the crossover design. Given the probability of each treatment sequence, the test based on the mixture model provides higher power than the test based on the randomized response model, which proves inadequate in the example. The paired t-test has higher power than both models if the randomization codes are broken. The sponsor may stop the trial early to claim the effectiveness of the study drug if the mixture model yields a positive result.
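A hedged sketch of the blinded mixture idea: without the codes, the period differences in a 2x2 crossover form a two-component normal mixture with known mixing proportion. The known-variance simplification and all numbers are my assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Blinded period-1 minus period-2 differences are a mixture
# p*N(theta, s^2) + (1-p)*N(-theta, s^2); theta is estimated by
# maximum likelihood without unblinding the sequence assignments.
rng = np.random.default_rng(2)
p, theta_true, s, n = 0.5, 1.0, 2.0, 120
seq = rng.random(n) < p                       # hidden randomization codes
d = rng.normal(np.where(seq, theta_true, -theta_true), s)

def neg_loglik(theta):
    return -np.log(p * norm.pdf(d, theta, s)
                   + (1 - p) * norm.pdf(d, -theta, s)).sum()

theta_hat = minimize_scalar(neg_loglik, bounds=(0, 5), method="bounded").x
```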

4.
ABSTRACT

In incident cohort studies, survival data often include subjects who have experienced an initial event at recruitment and may potentially experience two successive events (first and second) during the follow-up period. When disease registries or surveillance systems collect data based on incidence occurring within a specific calendar time interval, the initial event is usually subject to double truncation. Furthermore, since the second duration process is observable only if the first event has occurred, double truncation and dependent censoring arise together. In this article, under these two sampling biases and with an unspecified distribution of the truncation variables, we propose a nonparametric estimator of the joint survival function of the two successive duration times using the inverse-probability-weighted (IPW) approach. The consistency of the proposed estimator is established. Based on the estimated marginal survival functions, we also propose a two-stage procedure for estimating the parameters of a copula model. The bootstrap method is used to construct confidence intervals. Numerical studies demonstrate that the proposed estimation approaches perform well with moderate sample sizes.
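A toy illustration of inverse-probability weighting with the inclusion probabilities assumed known (in the paper they are estimated under double truncation; everything below is illustrative):

```python
import numpy as np

# IPW survival estimate: each observed duration is up-weighted by the
# inverse of its (here hypothetical, known) inclusion probability.
def ipw_survival(t_obs, pi, grid):
    w = 1.0 / pi
    return np.array([(w * (t_obs > t)).sum() / w.sum() for t in grid])

rng = np.random.default_rng(3)
t_obs = rng.exponential(2.0, size=300)
pi = np.clip(1 - np.exp(-t_obs), 0.05, 1.0)   # hypothetical inclusion prob.
grid = np.linspace(0, 6, 25)
S_hat = ipw_survival(t_obs, pi, grid)
```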

5.
Stochastic Models, 2013, 29(2):129–147
Abstract

This paper proposes a simple, partial equilibrium model for studying an individual's migration decisions. It shows that an individual may choose to delay migration even when conditions appear favorable, giving rise to the "waiting" behavior observed in the data. Using a closed-form solution, it also examines how the duration of waiting is affected by a number of economic factors, such as the risks associated with wages in the regions of origin and destination and the individual's attitude toward risk.

6.
ABSTRACT

The Cox proportional hazards regression model has been widely used to estimate the effect of a prognostic factor on a time-to-event outcome. A survey of survival analyses in cancer journals found that only 5% of studies using the Cox proportional hazards model attempted to verify the underlying assumption. Usually an estimate of the treatment effect from a fitted Cox model is reported without validation of the proportionality assumption, and it is not clear how such an estimate should be interpreted if that assumption is violated. In this article, we show that the estimate of the treatment effect from a Cox regression model can be interpreted as a weighted average of the log-scaled hazard ratio over the duration of the study. A hypothetical example is used to explain the weights.
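A minimal sketch using the lifelines package on simulated data (lifelines is my tooling choice, not the article's): fit the Cox model, then run the Schoenfeld-residual proportionality diagnostics before interpreting the coefficient as a constant hazard ratio.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulate a two-arm study, fit Cox, and check the PH assumption; if the
# check fails, exp(coef) is best read as a study-duration-weighted
# average hazard ratio rather than a constant effect.
rng = np.random.default_rng(4)
n = 400
trt = rng.integers(0, 2, n)
time = rng.exponential(np.where(trt == 1, 1.5, 1.0))
event = (rng.random(n) < 0.8).astype(int)
df = pd.DataFrame({"T": time, "E": event, "trt": trt})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.check_assumptions(df, p_value_threshold=0.05)  # prints PH diagnostics
```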

7.
Of the 324 petroleum refineries operating in the U.S. in 1982, only 149 were still in the hands of their original owners in 2007. Using duration analysis, this paper explores why refineries change ownership or shut down. Plants are more likely to 'survive' with their original owners if they are older or larger, but less likely if the owner is a major integrated firm or the refinery is technologically more complex; this latter result differs from existing research on the issue. The paper also presents a split population model to relax the duration model's general assumption that all refineries will eventually close down; the empirical results show that the split population model converges on a standard hazard model, with the log-logistic version fitting best. Finally, a multinomial logit model is estimated to analyze the factors that influence a refinery's choice of staying open, closing, or changing ownership. Plant size, age, and technology usage have positive impacts on the likelihood that a refinery will stay open or change ownership rather than close down.
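A sketch of the split-population (cure-type) likelihood with a log-logistic baseline, on invented data (the paper's covariates and dataset are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

# Split-population duration likelihood: a fraction p of plants never
# closes; the rest follow a log-logistic survival law. Events contribute
# (1-p)*f(t); censored spells contribute p + (1-p)*S(t).
def neg_loglik(params, t, d):
    logit_p, log_a, log_b = params
    p = 1 / (1 + np.exp(-logit_p))             # long-run survivor fraction
    a, b = np.exp(log_a), np.exp(log_b)        # log-logistic scale, shape
    S = 1 / (1 + (t / a) ** b)                 # log-logistic survival
    f = (b / a) * (t / a) ** (b - 1) * S ** 2  # log-logistic density
    ll = d * np.log((1 - p) * f) + (1 - d) * np.log(p + (1 - p) * S)
    return -ll.sum()

rng = np.random.default_rng(5)
t = rng.weibull(1.2, 300) * 10                 # invented durations (years)
d = (rng.random(300) < 0.6).astype(float)      # 1 = closed, 0 = censored
fit = minimize(neg_loglik, x0=[0.0, np.log(5), 0.0], args=(t, d))
```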

8.

This paper investigates the results of simulations generating clustered binary dose-response data. These data mimic the type of discrete data collected in developmental toxicity studies on animals. In particular, the simulations are designed under the assumption that hormesis exists, as evidenced by the dose-response pattern of the generated data. This implies the existence of a threshold level, since hormesis, if present, occurs below that level: below the threshold, no adverse effects exceeding the response at the control dose should occur. While hormesis is compatible with several dose-response patterns below the threshold, in this paper the hormetic pattern is assumed to be U-shaped. Improving upon the design of current and past developmental studies, the simulations also include designs in which dose levels and litters (clusters of animals) are allocated so as to increase the power for detecting hormesis, assuming it exists. The beta-binomial distribution is used to model the clustered binary data that result from responses of animals in the same litter. The simulation results indicate that altering the current designs of developmental studies improves the ability to detect hormesis.
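A sketch of this simulation design under invented numbers (doses, the U-shaped curve, the intra-litter correlation, and litter sizes are all illustrative assumptions):

```python
import numpy as np

# Litter-level responses are beta-binomial; the dose-response curve for
# the malformation probability dips below the control risk (hormesis)
# under the threshold and rises above it.
rng = np.random.default_rng(6)
doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0])
p0, threshold, rho = 0.10, 1.0, 0.1            # control risk, threshold, ICC

def mean_response(dose):
    if dose <= threshold:                      # hormetic U-shaped dip
        return p0 - 0.05 * np.sin(np.pi * dose / threshold)
    return p0 + 0.15 * (dose - threshold)      # increasing risk above it

def beta_binomial(p, rho, n_pups, rng):
    # ICC rho maps to Beta(a, b) with a + b = (1 - rho) / rho.
    a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
    return rng.binomial(n_pups, rng.beta(a, b))

litters = [(d, beta_binomial(mean_response(d), rho, 12, rng))
           for d in np.repeat(doses, 10)]      # 10 litters of 12 per dose
```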

9.
We consider in this work a k-level step-stress accelerated life-test (ALT) experiment with unequal duration steps τ = (τ1, …, τk). Censoring is allowed only at the change-stress point in the final stage. An exponential failure time distribution with mean life that is a log-linear function of stress, along with a cumulative exposure model, is considered as the working model. The problem of choosing the optimal τ is addressed using the variance-optimality criterion. Under this setting, we then show that the optimal k-level step-stress ALT model with unequal duration steps reduces to a 2-level step-stress ALT model.
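In common notation (mine, not copied from the paper), the working model pairs a log-linear mean life with the cumulative exposure model, which for exponential lifetimes reduces to a piecewise-constant hazard:

```latex
% Log-linear mean life at stress level x_i:
\theta_i = \exp(\beta_0 + \beta_1 x_i), \qquad i = 1, \dots, k.
% For exponential lifetimes, cumulative exposure yields the survival function
S(t) = \exp\!\left\{ -\sum_{j=1}^{i-1} \frac{\tau_j - \tau_{j-1}}{\theta_j}
       - \frac{t - \tau_{i-1}}{\theta_i} \right\},
\qquad \tau_{i-1} \le t < \tau_i, \quad \tau_0 = 0.
```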

10.
In this paper, we assume that the duration of a process has two intrinsic components, or phases, which are independent. The first is the time it takes for a trade to be initiated in the market (for example, the time during which agents learn about the market in which they operate and accumulate information, which is consistent with Brownian motion), and the second is the subsequent time required for the trade to develop into a complete duration. Of course, if the first time is zero, the trade is initiated immediately and no initial knowledge is required. If we assume a specific compound Bernoulli distribution for the first time and an inverse Gaussian distribution for the second, the resulting convolution model involves a mixture of an inverse Gaussian distribution and its reciprocal, which allows us to specify and test the unobserved heterogeneity in the autoregressive conditional duration (ACD) model.

Our proposal makes it possible not only to capture various density shapes of the durations but also to accommodate the tail behaviour of the distribution and a non-monotonic hazard function. The proposed model is easy to fit and characterizes the behaviour of the conditional durations reasonably well in terms of statistical criteria based on point and density forecasts.
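A sketch of an ACD(1,1) recursion with inverse-Gaussian innovations (plain IG is used in place of the paper's compound-Bernoulli/IG convolution to keep the sketch short; parameter values are illustrative):

```python
import numpy as np

# ACD(1,1): duration x_i = psi_i * eps_i with conditional mean recursion
# psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}.
rng = np.random.default_rng(7)
omega, alpha, beta, n = 0.1, 0.1, 0.8, 1000
mu, lam = 1.0, 2.0                       # IG mean (kept at 1) and shape

x = np.empty(n)
psi = omega / (1 - alpha - beta)         # start at the unconditional mean
for i in range(n):
    eps = rng.wald(mu, lam)              # inverse-Gaussian innovation
    x[i] = psi * eps                     # duration = cond. mean * innovation
    psi = omega + alpha * x[i] + beta * psi
```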


11.
ABSTRACT

This study develops methods for conducting uniform inference on quantile treatment effects for sharp regression discontinuity designs. We develop a score test for the treatment significance hypothesis and Wald-type tests for the hypotheses related to treatment significance, homogeneity, and unambiguity. The bias from the nonparametric estimation is studied in detail. In particular, we show that under some conditions, the asymptotic distribution of the score test is unaffected by the bias, without under-smoothing. For situations where the conditions can be restrictive, we incorporate a bias correction into the Wald tests and account for the estimation uncertainty. We also provide a procedure for constructing uniform confidence bands for quantile treatment effects. As an empirical application, we use the proposed methods to study the effect of cash-on-hand on unemployment duration. The results reveal pronounced treatment heterogeneity and also emphasize the importance of considering the long-term unemployed.
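A rough sketch of the basic estimand only: a local quantile regression on each side of the cutoff with a uniform kernel. The paper's score and Wald tests, bias correction, and uniform bands are not reproduced, and all names and data here are illustrative:

```python
import numpy as np
import statsmodels.api as sm

# QTE(tau) at cutoff c = 0: difference of side-specific conditional
# quantiles from local (within-bandwidth) linear quantile regressions.
rng = np.random.default_rng(8)
r = rng.uniform(-1, 1, 2000)                       # running variable
y = 0.5 * (r >= 0) + r + 0.5 * rng.standard_normal(2000)

def side_quantile(side, tau, h=0.3):
    m = ((r >= 0) == side) & (np.abs(r) < h)       # one side, |r| < h
    X = sm.add_constant(r[m])
    fit = sm.QuantReg(y[m], X).fit(q=tau)
    return fit.params[0]                           # intercept = quantile at 0

qte_median = side_quantile(True, 0.5) - side_quantile(False, 0.5)
```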

12.
Abstract

Variable selection is a fundamental challenge in statistical learning when one works with data sets containing a huge number of predictors. In this article we consider procedures popular in model selection: the Lasso and the adaptive Lasso. Our goal is to investigate properties of estimators based on minimization of Lasso-type penalized empirical risk with a convex, possibly nondifferentiable, loss function. We obtain theorems concerning the rate of convergence in estimation, consistency in model selection, and oracle properties for Lasso estimators when the number of predictors is fixed, i.e. does not depend on the sample size. Moreover, we study the properties of Lasso and adaptive Lasso estimators on simulated and real data sets.
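A minimal adaptive-Lasso sketch via column rescaling, with penalty weights from a pilot OLS fit (the regularization strength and data are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Adaptive Lasso: penalize coefficient j by w_j = 1/|beta_init_j|,
# implemented by dividing each column of X by w_j, running a standard
# Lasso, and mapping the coefficients back.
rng = np.random.default_rng(9)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
y = X @ beta + rng.standard_normal(n)

w = 1.0 / np.abs(LinearRegression().fit(X, y).coef_)   # adaptive weights
lasso = Lasso(alpha=0.1).fit(X / w, y)                 # lasso on rescaled X
beta_hat = lasso.coef_ / w                             # back to original scale
```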

13.
In this paper, we consider a k-level step-stress accelerated life-testing (ALT) experiment with unequal duration steps τ = (τ1, …, τk). Censoring is allowed only at the change-stress point in the final stage. A general log-location-scale lifetime distribution with mean life that is a linear function of stress, along with a cumulative exposure model, is considered as the working model. Under this model, the determination of the optimal choice of τ for both the Weibull and lognormal distributions is addressed using the variance-optimality criterion. Numerical results show that for general log-location-scale distributions, the optimal k-level step-stress ALT model with unequal duration steps reduces to a 2-level step-stress ALT model.

14.
The aim of this paper is to show the flexibility and capacity of penalized spline smoothing as an estimation routine for modelling duration-time data. We analyse unemployment behaviour in Germany between 2000 and 2004 using a massive database from the German Federal Employment Agency. To investigate dynamic covariate effects and differences between competing job markets depending on the distance between the former and the new workplace, a functional duration-time model with competing risks is used. It is built upon a competing hazard function in which some of the smooth covariate effects are allowed to vary with unemployment duration. The focus of our analysis is on contrasting the spatial, economic, and individual covariate effects of the competing job markets and on analysing their general influence on the unemployed's re-employment probabilities. Our analyses reveal differences with respect to gender, age, and education, as well as a difference between the newly formed German states and the old West German states. Moreover, the spatial pattern differs between the job markets considered.
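A minimal Eilers–Marx P-spline sketch, the smoothing engine the paper builds on (data and tuning values are illustrative; the paper's competing-risks hazard model adds substantially more structure):

```python
import numpy as np
from scipy.interpolate import BSpline

# P-spline: B-spline basis plus a ridge penalty on second differences
# of the coefficients, solved as penalized least squares.
def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    knots = np.linspace(x.min() - 1e-6, x.max() + 1e-6, n_knots)
    t = np.concatenate([[knots[0]] * degree, knots, [knots[-1]] * degree])
    n_basis = len(t) - degree - 1
    B = BSpline.design_matrix(x, t, degree).toarray()
    D = np.diff(np.eye(n_basis), n=2, axis=0)       # 2nd-order differences
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return BSpline(t, coef, degree)

rng = np.random.default_rng(10)
x = np.sort(rng.uniform(0, 24, 300))                # duration in months
y = np.exp(-x / 12) + 0.05 * rng.standard_normal(300)
smooth = pspline_fit(x, y)                          # callable fitted curve
```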

15.
ABSTRACT

Recent efforts by the American Statistical Association to improve statistical practice, especially in countering the misuse and abuse of null hypothesis significance testing (NHST) and p-values, are to be welcomed. But will they be successful? The present study offers compelling evidence that this will be an extraordinarily difficult task. Dramatic citation-count data on 25 articles and books severely critical of NHST's negative impact on good science underline that this issue was, and is, well known; yet these criticisms did nothing to stem NHST usage, which in fact increased over the period 1960–2007. To succeed in this endeavor, and to restore the relevance of the statistics profession to the scientific community in the 21st century, the ASA must be prepared to dispense detailed advice. This includes specifying those situations, if they can be identified, in which the p-value plays a clearly valuable role in data analysis and interpretation. The ASA might also consider a statement that recommends abandoning the use of p-values.

16.
ABSTRACT

The most important factor in kernel regression is the choice of bandwidth. Considerable attention has been paid to extending the idea of an iterative method, known from kernel density estimation, to kernel regression. Data-driven selectors of the bandwidth for kernel regression are considered. The proposed method is based on an optimally balanced relation between the integrated variance and the integrated squared bias, leading to an iterative, quadratically convergent process. The analysis of statistical properties shows the rationale of the proposed method, and its consistency is established. The utility of the method is illustrated through a simulation study and real data applications.
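A heavily simplified caricature of such a fixed-point iteration for a Nadaraya-Watson smoother; the pilot estimates of integrated variance and squared bias below are crude stand-ins, not the article's estimators:

```python
import numpy as np

# At the AMISE optimum, the variance term IV/(n h) balances the bias
# term h^4 * B, giving the fixed point h = (IV / (4 n B))^{1/5};
# iterate with rough pilot proxies for IV and B.
rng = np.random.default_rng(11)
n = 300
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def nw(h):
    w = np.exp(-0.5 * ((x - x[:, None]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

h = 0.2
for _ in range(20):
    diff = nw(h) - nw(h / np.sqrt(2))            # bias proxy via halved h^2
    b_const = 4 * np.mean(diff ** 2) / h ** 4    # strip the h^4 scaling
    iv = np.var(y - nw(h))                       # crude variance proxy
    h_new = (iv / (4 * n * max(b_const, 1e-12))) ** 0.2
    if abs(h_new - h) < 1e-4:
        break
    h = 0.5 * (h + h_new)                        # damped update
```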

17.
Abstract

In this article, we develop a model for a convertible item (or product) whose initial form converts into another product at the expense of both conversion cost and conversion time. After a further duration, it converts again into a new product of a different nature, so the conversion is sequential, from the initial form into two other products across states. The demand pattern and deterioration rate differ at each converted state. An inventory model is developed for such a sequentially convertible item. Expressions for the total cost and other related costs (per state) are derived, and the optimal times to convert the product between states are calculated under the model assumptions. A numerical example supports the theoretical findings and validates the model.
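A toy stand-in for the optimization step (all functional forms, costs, and numbers are invented for illustration; the paper derives its own state-wise cost expressions):

```python
import numpy as np
from scipy.optimize import minimize

T = 12.0                                   # planning horizon
demand = np.array([10.0, 8.0, 6.0])        # demand rate per state
deter = np.array([0.02, 0.05, 0.08])       # deterioration rate per state
hold, conv_cost = 0.5, (40.0, 60.0)        # holding cost, conversion costs

def total_cost(t):
    t1, t2 = t
    if not (0 < t1 < t2 < T):
        return np.inf                      # conversions must be ordered
    spans = np.array([t1, t2 - t1, T - t2])
    # holding + deterioration accrued state by state (linear proxies)
    run = (hold * demand * spans ** 2 / 2 + deter * demand * spans).sum()
    return run + sum(conv_cost)

best = minimize(total_cost, x0=[3.0, 7.0], method="Nelder-Mead")
t1_opt, t2_opt = best.x                    # optimal conversion times
```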

18.
We analyse a flexible parametric estimation technique for a competing risks (CR) model with unobserved heterogeneity by extending a local mixed proportional hazard single-risk model for continuous duration time to a local mixture CR (LMCR) model for discrete duration time. The state-specific local hazard function of the LMCR model is by definition a valid density function if there are either one or two destination states. We conduct Monte Carlo experiments comparing the estimated parameters of the LMCR model, and those of a CR model based on a Heckman–Singer-type (HS-type) technique, with the data-generating-process parameters. The Monte Carlo results show that the LMCR model performs better than or at least as well as the HS-type model with respect to the estimated structural parameters in most cases, but relatively worse with respect to the estimated duration-dependence parameters.

19.
ABSTRACT

The local linear estimator is a popular method for estimating non-parametric regression functions, and many methods have been derived to estimate its smoothing parameter, the bandwidth. In this article, we propose an information-criterion-based bandwidth selection method, with the degrees of freedom originally derived for non-parametric inference. Unlike the plug-in method, the new method does not require preliminary parameters to be chosen in advance, and it is computationally efficient compared with the cross-validation (CV) method. A numerical study shows that the new method performs better than, or comparably to, existing plug-in and CV methods in terms of estimating the mean functions, and it has lower variability than CV selectors. Real data applications are also provided to illustrate the effectiveness of the new method.
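A sketch of the general recipe, with the Hurvich–Simonoff–Tsai AICc used as a stand-in for the article's criterion and simulated data: build the local-linear hat matrix H(h), take df = tr(H), and minimize the criterion over a bandwidth grid.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def hat_matrix(h):
    H = np.empty((n, n))
    for i, x0 in enumerate(x):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
        X = np.column_stack([np.ones(n), x - x0])
        XtW = X.T * w
        H[i] = np.linalg.solve(XtW @ X, XtW)[0]  # e1' (X'WX)^{-1} X'W
    return H

def aicc(h):
    H = hat_matrix(h)
    resid = y - H @ y
    df = np.trace(H)                             # effective degrees of freedom
    return np.log(np.mean(resid ** 2)) + (1 + df / n) / (1 - (df + 2) / n)

grid = np.linspace(0.02, 0.3, 15)
h_opt = grid[np.argmin([aicc(h) for h in grid])]
```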

20.
Abstract

An exact, closed-form, and easy-to-compute expression for the mean integrated squared error (MISE) of a kernel estimator of a normal mixture cumulative distribution function is derived for the class of arbitrary-order Gaussian-based kernels. Comparisons are made with the MISE of the empirical distribution function, the infeasible minimum MISE, and the uniform kernel. A simple plug-in method for simultaneously selecting the optimal bandwidth and kernel order is proposed, based on a non-asymptotic approximation of the unknown distribution by a normal mixture. A simulation study shows that the method provides a viable alternative to existing bandwidth selection procedures.
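For orientation, the standard objects involved, written in my notation (the paper's closed-form MISE expression itself is not reproduced here):

```latex
% Kernel CDF estimator with the (second-order) Gaussian kernel:
\hat F_h(x) = \frac{1}{n} \sum_{i=1}^{n} \Phi\!\left( \frac{x - X_i}{h} \right),
% and the criterion evaluated in closed form for Gaussian-based kernels:
\mathrm{MISE}(h) = \mathbb{E} \int \bigl( \hat F_h(x) - F(x) \bigr)^2 \, dx .
```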
