1.
In this paper, the asymptotic relative efficiency (ARE) of Wald tests for the Tweedie class of models with log-linear mean is considered when the auxiliary variable is measured with error. Wald test statistics based on the naive maximum likelihood estimator and on a consistent estimator obtained using Nakamura's (1990) corrected score function approach are defined. As shown analytically, the Wald statistics based on the naive and corrected score function estimators are asymptotically equivalent in terms of ARE. On the other hand, the asymptotic relative efficiency of the naive and corrected Wald statistics with respect to the Wald statistic based on the true covariate equals the square of the correlation between the unobserved and the observed covariate. A small-scale Monte Carlo study and an example illustrate the small-sample situation.
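The attenuation result above (ARE equal to the squared correlation between the true and observed covariate) can be illustrated numerically. A minimal sketch assuming classical additive measurement error; all variable names and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma_x, sigma_u = 1.0, 0.5          # sd of true covariate and of measurement error

x = rng.normal(0.0, sigma_x, n)      # unobserved true covariate
w = x + rng.normal(0.0, sigma_u, n)  # observed, error-contaminated covariate

# Empirical squared correlation between the true and observed covariate ...
rho2_hat = np.corrcoef(x, w)[0, 1] ** 2

# ... matches the reliability ratio var(x) / (var(x) + var(u)), which the
# abstract identifies as the ARE of the naive/corrected Wald tests.
rho2_theory = sigma_x**2 / (sigma_x**2 + sigma_u**2)

print(round(rho2_hat, 3), round(rho2_theory, 3))
```

With an error variance a quarter of the covariate variance, roughly 20% of the efficiency of the true-covariate Wald test is lost.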
2.
One of the greatest challenges related to the use of piecewise exponential models (PEMs) is finding an adequate grid of time points for their construction. In general, the number of intervals in such a grid and the positions of their endpoints are ad hoc choices. We extend previous works by introducing a fully Bayesian approach for the piecewise exponential model in which the grid of time points (and, consequently, the endpoints and the number of intervals) is random. We estimate the failure rates using the proposed procedure and compare the results with the non-parametric piecewise exponential estimates. Estimates for the survival function using the most probable partition are compared with the Kaplan-Meier estimators (KMEs). A sensitivity analysis for the proposed model is provided, considering different prior specifications for the failure rates and for the grid. We also evaluate the effect of different percentages of censored observations on the estimates. An application to a real data set is also provided. We notice that the posteriors are strongly influenced by the prior specifications, mainly for the failure rate parameters. Thus, the priors must be carefully elicited so that they genuinely reflect expert opinion.
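As a sketch of the underlying model, the following evaluates the PEM survival function for a fixed grid of time points with a constant hazard rate on each interval (the grid and rates here are hypothetical; the paper treats the grid itself as random):

```python
import numpy as np

def pem_survival(t, grid, rates):
    """Survival S(t) for a piecewise exponential model.

    grid  : interior endpoints 0 < a_1 < ... < a_{k-1} (last interval open-ended)
    rates : constant hazard lambda_j on each of the k intervals
    """
    edges = np.concatenate(([0.0], grid, [np.inf]))
    # time spent in each interval up to t
    exposure = np.clip(t - edges[:-1], 0.0, edges[1:] - edges[:-1])
    # S(t) = exp(-cumulative hazard)
    return float(np.exp(-np.sum(np.asarray(rates) * exposure)))

# Hypothetical grid {1, 3} with rates per interval; at t = 2 the cumulative
# hazard is 0.2 * 1 + 0.1 * 1 = 0.3
S = pem_survival(2.0, grid=np.array([1.0, 3.0]), rates=[0.2, 0.1, 0.4])
```

A random-grid version would place a prior over `grid` (its length and endpoints) rather than fixing it.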
3.
A novel fully Bayesian approach for modeling survival data with explanatory variables using the Piecewise Exponential Model (PEM) with a random time grid is proposed. We consider a class of correlated Gamma prior distributions for the failure rates. Such a prior specification is obtained via the dynamic generalized modeling approach jointly with a random time grid for the PEM. A product distribution is considered for modeling the prior uncertainty about the random time grid, making it possible to use the structure of the Product Partition Model (PPM) to handle the problem. A unifying notation for the construction of the likelihood function of the PEM, suitable for both static and dynamic modeling approaches, is considered. Procedures to evaluate the performance of the proposed model are provided. Two case studies are presented to exemplify the methodology. For comparison purposes, the data sets are also fitted using the dynamic model with a fixed time grid established in the literature. The results show the superiority of the proposed model.
4.
It is common to have experiments in which it is not possible to observe the exact lifetimes but only the interval where they occur. This sort of data presents a high number of ties and is called grouped or interval-censored survival data. Regression methods for grouped data are available in the statistical literature. The regression structure models the probability of a subject's survival past a visit time conditional on survival at the previous visit. Two approaches are presented, assuming that lifetimes come from (1) a continuous proportional hazards model or (2) a logistic model. However, there may be situations in which neither model is adequate for a particular data set. This article proposes the generalized log-normal model as an alternative for discrete survival data. This model was introduced by Chen (1995) and is extended in this article to grouped survival data. A real example related to Chagas disease illustrates the proposed model.
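A minimal sketch of the discrete-time formulation described above, in which survival past each visit is the product of conditional probabilities of surviving each interval given survival to its start, here under the logistic link (approach 2); the intercepts and covariate values are hypothetical:

```python
import math

def logistic_hazard(alpha_j, beta, x):
    """Conditional probability of failing in interval j, given survival to
    its start, under a logistic model with interval intercept alpha_j."""
    eta = alpha_j + beta * x
    return 1.0 / (1.0 + math.exp(-eta))

def discrete_survival(hazards):
    """Survival past each visit from per-interval conditional hazards h_j:
    S_j = prod_{i <= j} (1 - h_i)."""
    S, out = 1.0, []
    for h in hazards:
        S *= 1.0 - h
        out.append(S)
    return out

# Hypothetical interval intercepts for three visits and one covariate x = 1.0
hazards = [logistic_hazard(a, 0.5, 1.0) for a in (-2.0, -1.5, -1.0)]
S = discrete_survival(hazards)   # decreasing survival probabilities per visit
```

The generalized log-normal alternative proposed in the paper would replace the logistic form of the conditional probabilities while keeping this same product structure.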
5.
E.A. Lowe, A.M. Tinker. Omega, 1977, 5(2):173-183
Using a general systems rationale, this paper develops a theoretical structure for approaching the problem of management accounting. The management control problem is explicated in terms of maintaining a relationship between the enterprise's structure and its environment. An enterprise's structure is composed of three elements (and their inter-relations): a decision and control subsystem, a financial funds subsystem, and an operating (physical transforms) subsystem. Portrayed in these terms, the problem of management accounting is shown to require a methodology able to take cognizance of economic, sociological, psychological, and other aspects of the enterprise system. The model described here provides a general intellectual frame of reference for ordering the problem.
6.
We consider the problem of adjusting a machine that manufactures parts in batches or lots and experiences random offsets or shifts whenever a set-up operation takes place between lots. The existing procedures for adjusting set-up errors in a production process over a set of lots are based on the assumption of known process parameters. In practice, these parameters are usually unknown, especially in short-run production. Due to this lack of knowledge, adjustment procedures such as Grubbs' (1954, 1983) rules and discrete integral controllers (also called EWMA controllers), aimed at adjusting for the initial offset in each single lot, are typically used. This paper presents an approach for adjusting the initial machine offset over a set of lots when the process parameters are unknown and are iteratively estimated using Markov Chain Monte Carlo (MCMC). As each observation becomes available, a Gibbs sampler is run to estimate the parameters of a hierarchical normal means model given the observations up to that point in time. The current lot mean estimate is then used for adjustment. If used over a series of lots, the proposed method eventually allows one to start adjusting the offset before producing the first part in each lot. The method is illustrated with two examples reported in the literature. It is shown how the proposed MCMC adjustment procedure can outperform existing rules based on a quadratic off-target criterion.
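A minimal sketch of a Gibbs sampler for a hierarchical normal means model of lot offsets. Unlike the paper, which also estimates the variance components, this sketch assumes known variances for simplicity; all names and parameter values are hypothetical:

```python
import numpy as np

def gibbs_offsets(y, sigma2=1.0, tau2=1.0, mu0=0.0, v0=100.0,
                  n_iter=2000, seed=1):
    """Gibbs sampler for  y[i][j] ~ N(d_i, sigma2),  d_i ~ N(mu, tau2),
    mu ~ N(mu0, v0), with sigma2 and tau2 assumed known.
    Returns draws of the per-lot offsets d and the global mean mu."""
    rng = np.random.default_rng(seed)
    k = len(y)
    mu = mu0
    d_draws, mu_draws = [], []
    for _ in range(n_iter):
        # Full conditional of each lot offset d_i: normal, precision-weighted
        # combination of the lot's data mean and the global mean mu.
        d = np.empty(k)
        for i, yi in enumerate(y):
            prec = len(yi) / sigma2 + 1.0 / tau2
            mean = (np.sum(yi) / sigma2 + mu / tau2) / prec
            d[i] = rng.normal(mean, np.sqrt(1.0 / prec))
        # Full conditional of mu given the current offsets.
        prec_mu = k / tau2 + 1.0 / v0
        mean_mu = (np.sum(d) / tau2 + mu0 / v0) / prec_mu
        mu = rng.normal(mean_mu, np.sqrt(1.0 / prec_mu))
        d_draws.append(d)
        mu_draws.append(mu)
    return np.array(d_draws), np.array(mu_draws)

# Two hypothetical lots with true offsets near 2 and -1
y = [np.array([2.1, 1.9, 2.3]), np.array([-0.8, -1.2])]
d_draws, mu_draws = gibbs_offsets(y)
d_hat = d_draws[500:].mean(axis=0)   # posterior-mean offset per lot, post burn-in
```

In the sequential setting of the paper, such a sampler would be rerun as each new observation arrives, and `d_hat` for the current lot would drive the adjustment.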
7.
Lifetime Data Analysis - Models for situations where some individuals are long-term survivors, immune or non-susceptible to the event of interest, are extensively studied in biomedical research....
8.
In some survival studies, the exact time of the event of interest is unknown, but the event is known to have occurred during a particular period of time (interval-censored data). If the diagnostic tool used to detect the event of interest is not perfectly sensitive and specific, outcomes may be mismeasured: a healthy subject may be diagnosed as sick and a sick one as healthy. In such cases, traditional survival analysis methods produce biased estimates of the time-to-failure distribution parameters (Paggiaro and Torelli 2004). In this context, we developed a parametric model that incorporates sensitivity and specificity into a grouped survival data analysis (a case of interval-censored data in which all subjects are tested at the same predetermined time points). Inferential aspects and properties of the methodology, such as the likelihood function and identifiability, are discussed in this article. Assuming known and nondifferential misclassification, Monte Carlo simulations showed that the proposed model performed well in the case of mismeasured outcomes; the relative bias of its estimates was lower than that of the naive method that assumes perfect sensitivity and specificity. The proposed methodology is illustrated by a study of mango tree lifetimes.
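The role of sensitivity and specificity can be sketched through the apparent failure probability at a visit: an imperfect test reports "failed" either as a true positive or as a false positive. A minimal illustration with hypothetical values:

```python
def apparent_failure_prob(p_fail, sensitivity, specificity):
    """Probability the diagnostic reads 'failed' when the true failure
    probability at the visit is p_fail, for an imperfect test:
    true positives plus false positives."""
    return sensitivity * p_fail + (1.0 - specificity) * (1.0 - p_fail)

# With Se = 0.90 and Sp = 0.95, a true 10% failure probability is observed as
# 0.90 * 0.10 + 0.05 * 0.90 = 0.135
p_obs = apparent_failure_prob(0.10, 0.90, 0.95)
```

A likelihood built on `apparent_failure_prob` instead of `p_fail` is, in spirit, how misclassification can be absorbed into a grouped survival analysis; a naive analysis would treat 0.135 as the failure probability itself.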
9.
Determination of preventive maintenance is an important issue for systems under degradation. A typical maintenance policy calls for complete preventive repair actions at pre-scheduled times and minimal repair actions whenever a failure occurs. Under minimal repair, failures are modeled according to a nonhomogeneous Poisson process. A perfect preventive maintenance restores the system to the as-good-as-new condition. The motivation for this article was a maintenance data set related to power switch disconnectors. Two different types of failures could be observed for these systems according to their causes; the major difference between the two types is their costs. Assuming that the system will be in operation for an infinite time, we find the expected cost per unit of time for each preventive maintenance policy and hence obtain the optimal strategy as a function of the processes' intensities. Assuming a parametric form for the intensity function, large-sample estimates for the optimal maintenance check points are obtained and discussed.
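Under a power-law intensity lambda(t) = alpha * beta * t**(beta - 1), the expected number of minimal repairs in [0, tau] is alpha * tau**beta, so the long-run cost rate of a policy with preventive maintenance every tau time units has a closed-form minimizer. A sketch under this single-failure-type simplification (the paper distinguishes two failure types with different costs); all parameter values are hypothetical:

```python
def optimal_pm_interval(alpha, beta, c_minimal, c_preventive):
    """Check point tau* minimizing the long-run cost per unit time
        g(tau) = (c_preventive + c_minimal * alpha * tau**beta) / tau
    for a power-law NHPP intensity lambda(t) = alpha*beta*t**(beta-1).
    Setting g'(tau) = 0 gives
        tau* = (c_preventive / (c_minimal * alpha * (beta - 1)))**(1/beta)."""
    if beta <= 1.0:
        raise ValueError("a finite optimum needs an increasing intensity (beta > 1)")
    return (c_preventive / (c_minimal * alpha * (beta - 1.0))) ** (1.0 / beta)

# Hypothetical costs and intensity parameters: tau* = (2 / (1 * 0.5 * 1))**0.5
tau_star = optimal_pm_interval(alpha=0.5, beta=2.0, c_minimal=1.0, c_preventive=2.0)
```

With two failure types, as in the paper, the minimal-repair term would be replaced by a cost-weighted sum over the two processes' intensities.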
10.
The main object of this paper is to discuss properties of score statistics for testing the null hypothesis of no association in the Weibull model with measurement errors. Three different score statistics are considered: the efficient score statistic, a naive score statistic obtained by replacing the unobserved true covariate with the observed one, and a score statistic based on the corrected score function. It is shown that the corrected and naive score statistics are equivalent, and the asymptotic relative efficiency between the naive and efficient score statistics is derived.