This paper considers the optimal design problem for multivariate mixed-effects logistic models with longitudinal data. A decomposition of the binary outcome and the penalized quasi-likelihood are used to obtain the information matrix. The D-optimality criterion based on the approximate information matrix is optimized under different cost constraints. The results show that the autocorrelation coefficient plays a significant role in the design. To overcome the dependence of the D-optimal designs on the unknown fixed-effects parameters, a Bayesian D-optimality criterion is proposed. The relative efficiencies of the designs reveal that both the cost ratio and the autocorrelation coefficient play an important role in the optimal designs.
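As a minimal illustration of the D-optimality idea only (not the paper's multivariate mixed-effects setup), the sketch below compares two candidate designs for a plain fixed-effects logistic model by the determinant of the Fisher information. Random effects, longitudinal autocorrelation, and cost constraints are all omitted, and the design points and parameter values are made up.

```python
import numpy as np

def logistic_information(X, beta):
    """Fisher information for a fixed-effects logistic model.
    X: (n, p) design matrix; beta: (p,) coefficients.
    Simplified stand-in: no random effects or autocorrelation,
    which the paper handles via penalized quasi-likelihood."""
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # success probabilities
    W = np.diag(p * (1.0 - p))              # Bernoulli variance weights
    return X.T @ W @ X

def d_criterion(X, beta):
    """D-optimality: a larger det(information) is better
    (equivalently, minimize the determinant of its inverse)."""
    return np.linalg.det(logistic_information(X, beta))

# Two hypothetical four-run designs for an intercept/slope model
beta = np.array([0.5, 1.0])
X1 = np.column_stack([np.ones(4), [-1.0, -1.0, 1.0, 1.0]])   # spread out
X2 = np.column_stack([np.ones(4), [0.0, 0.0, 0.5, 0.5]])     # clustered
better = X1 if d_criterion(X1, beta) > d_criterion(X2, beta) else X2
```

As expected, the spread-out design `X1` wins under this criterion, since clustering the design points makes the slope poorly identified.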
The load-sharing model has been studied since the early 1940s to account for the stochastic dependence of components in a parallel system. It assumes that, as components fail one by one, the total workload applied to the system is shared by the remaining components and thus affects their performance. Such dependent systems have been studied in many engineering applications, including fiber composites, manufacturing, power plants, computing workload analysis, and software and hardware reliability. Many statistical models have been proposed to analyze the impact of each redistribution of the workload, i.e., the change in the hazard rate of each remaining component. However, they do not consider how long a surviving component had been working prior to the redistribution. We call such load-sharing models memoryless. To remedy this limitation, we propose a general framework for load-sharing models that accounts for work history. Through simulation studies, we show that inappropriate use of the memoryless assumption can lead to inaccurate inference on the impact of redistribution. Further, a real-data example of plasma display devices is analyzed to illustrate our methods.
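A memoryless equal-load-sharing system can be sketched as follows. The redistribution rule (each survivor's hazard scaled by `(n / remaining) ** gamma`) and the exponential lifetimes are illustrative assumptions, not the paper's general framework; with exponential hazards the work history genuinely drops out, which is exactly the memoryless assumption the paper questions.

```python
import numpy as np

def simulate_parallel_system(n, base_rate, gamma, rng):
    """Failure times of an n-component parallel load-sharing system.
    Memoryless variant: after each failure every survivor's hazard
    becomes base_rate * (n / remaining) ** gamma, regardless of how
    long that survivor has already worked (illustrative rule)."""
    t = 0.0
    failure_times = []
    remaining = n
    while remaining > 0:
        rate_each = base_rate * (n / remaining) ** gamma  # shared load
        # Minimum of `remaining` iid exponentials: exponential with
        # the summed rate, so we can jump straight to the next failure.
        t += rng.exponential(1.0 / (remaining * rate_each))
        failure_times.append(t)
        remaining -= 1
    return failure_times

times = simulate_parallel_system(5, 1.0, 1.0, np.random.default_rng(0))
```

A history-dependent framework would replace `rate_each` with a function of each survivor's accumulated exposure, which is precisely where the exponential shortcut above stops being valid.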
A new set of alternative socioeconomic scenarios for climate change research—the shared socioeconomic pathways (SSPs)—includes for the first time a more comprehensive set of demographic conditions on population, urbanization, and education as the central scenario elements, along with other aspects of society, in order to facilitate better analyses of challenges to climate change mitigation and adaptation. However, it also raises a new question about the internal consistency of assumptions on different demographic and economic trends under each SSP. This paper examines whether the interactions between the demographic and economic factors implied by the assumptions in the SSP projections are consistent with the research literature, and whether they are consistently represented in the projection results. Our analysis shows that the interactions implied by the demographic assumptions in the SSPs are generally consistent with findings from the literature, and the majority of the assumed relationships are also evident in the projected trends. It also reveals some inconsistency issues, resulting mainly from the use of inconsistent definitions of regions and limitations in our understanding of future changes in the patterns of interactions at different stages of socioeconomic development. Finally, we offer recommendations on how to improve demographic assumptions in the extended SSPs, and how to use the projections of SSP central elements in climate change research.
As flood risks grow worldwide, a well-designed insurance program engaging various stakeholders becomes a vital instrument in flood risk management. The main challenge concerns the applicability of standard approaches to calculating insurance premiums for rare catastrophic losses. This article focuses on the design of a flood-loss-sharing program involving private insurance based on location-specific exposures. The analysis is guided by an integrated catastrophe risk management (ICRM) model consisting of a GIS-based flood model and a stochastic optimization procedure with respect to location-specific risk exposures. To achieve stability and robustness of the program toward floods with various recurrences, the ICRM uses a stochastic optimization procedure that relies on quantile-related risk functions of systemic insolvency involving overpayments and underpayments of the stakeholders. Two alternative ways of calculating insurance premiums are compared: robust premiums derived with the ICRM and the traditional average annual loss approach. The applicability of the proposed model is illustrated in a case study of a Rotterdam area outside the main flood protection system in the Netherlands. Our numerical experiments demonstrate essential advantages of the robust premiums, namely, that they: (1) guarantee the program's solvency under all relevant flood scenarios rather than one average event; (2) establish a tradeoff between the security of the program and the welfare of locations; and (3) decrease the need for other risk transfer and risk reduction measures.
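The contrast between an average-annual-loss premium and a solvency-oriented quantile premium can be shown with a toy sketch. The loss scenarios and the quantile level below are made up, and the quantile rule only loosely mimics the solvency idea; the actual ICRM couples a GIS-based flood model with stochastic optimization over location-specific exposures.

```python
import numpy as np

def premiums(losses, q=0.99):
    """Compare the traditional average-annual-loss (AAL) premium with
    a quantile-based 'robust' premium that covers the q-quantile of
    scenario losses (illustrative stand-in for the ICRM premiums)."""
    losses = np.asarray(losses, dtype=float)
    aal = losses.mean()              # premium from the average event
    robust = np.quantile(losses, q)  # covers losses in q of scenarios
    return aal, robust

# 95 benign years and 5 catastrophic years of equal loss 100
aal, robust = premiums([0.0] * 95 + [100.0] * 5, q=0.99)
```

With rare catastrophic losses the AAL premium (5.0 here) is far below the loss in any catastrophe year (100.0), so a fund priced at the AAL is insolvent exactly when it is needed; the quantile premium makes that tradeoff explicit.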
We investigate the convergence rates of uniform bias-corrected confidence intervals for a smooth curve using local polynomial regression for both the interior and boundary region. We discuss the cases when the degree of the polynomial is odd and even. The uniform confidence intervals are based on the volume-of-tube formula modified for biased estimators. We empirically show that the proposed uniform confidence intervals attain, at least approximately, nominal coverage. Finally, we investigate the performance of the volume-of-tube based confidence intervals for independent non-Gaussian errors.
In modeling disease transmission, contacts are assumed to have different infection rates. A proper simulation must model the heterogeneity in the transmission rates. In this article, we present a computationally efficient algorithm that can be applied to a population with heterogeneous transmission rates. We conducted a simulation study to show that the algorithm is more efficient than other algorithms for sampling the disease transmission in a subset of the heterogeneous population. We use a valid stochastic model of pandemic influenza to illustrate the algorithm and to estimate the overall infection attack rates of influenza A (H1N1) in a Canadian city.
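One standard way to sample heterogeneous Bernoulli transmission events efficiently is thinning: jump between candidate contacts with Geometric(p_max) skips and accept candidate i with probability p[i] / p_max, so the expected work scales with n * p_max rather than one uniform draw per contact. The sketch below is an illustrative stand-in, not the paper's algorithm.

```python
import numpy as np

def sample_infected(p, rng):
    """Sample which of n contacts become infected when contact i
    transmits with probability p[i] (heterogeneous Bernoulli draws).
    Thinning sketch: candidates are selected at the maximum rate,
    then accepted at the true rate, so each i is kept with overall
    probability p_max * (p[i] / p_max) = p[i]."""
    n = len(p)
    p_max = max(p)
    infected = []
    i = rng.geometric(p_max) - 1          # index of first candidate
    while i < n:
        if rng.random() < p[i] / p_max:   # thin to the true rate
            infected.append(i)
        i += rng.geometric(p_max)         # skip to the next candidate
    return infected

result = sample_infected([0.5] * 10, np.random.default_rng(1))
```

When the transmission probabilities are small, most contacts are skipped in a single geometric jump, which is where the savings over naive per-contact sampling come from.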
Quasi-likelihood nonlinear models (QLNMs) are an extension of generalized linear models and include a wider class of models as special cases. This article investigates some diagnostic methods in QLNMs. An equivalence between a case-deletion model and a mean-shift outlier model in QLNMs is established. Two simulation studies and a real dataset are used to illustrate the proposed diagnostic methods.
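The classical linear-model analogue of this equivalence is easy to verify numerically: deleting case i yields the same coefficient estimates as keeping it and adding a mean-shift indicator for that case. The sketch below uses ordinary least squares with made-up data; the paper's contribution is extending this kind of result to QLNMs.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares coefficient estimates."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Made-up regression data
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(20), rng.normal(size=20)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=20)

i = 7
# Case-deletion model: drop observation i entirely
beta_del = fit_ols(np.delete(X, i, axis=0), np.delete(y, i))

# Mean-shift outlier model: keep observation i but add an
# indicator column that absorbs its residual completely
d = np.zeros(20)
d[i] = 1.0
beta_shift = fit_ols(np.column_stack([X, d]), y)[:2]
```

The indicator column forces a perfect fit at case i, so that observation contributes nothing to the remaining coefficients, which is why `beta_del` and `beta_shift` coincide.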