In this paper, a new generalization of the Kumaraswamy distribution, namely the Kumaraswamy Marshall-Olkin Exponential (KwMOE) distribution, is introduced and studied. Its structural properties are explored, including limiting behaviour, shape properties, moments, quantiles, mean deviation, Rényi entropy, order statistics, and stochastic ordering. Some useful characterizations of the family are also obtained. The model parameters are estimated by the method of maximum likelihood, and a Monte Carlo simulation study is conducted. An application to a real data set is presented for illustrative purposes.
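Since the quantile function of a Kumaraswamy-generalized family is available in closed form, random variates can be drawn by inverse transform. The sketch below assumes the standard Kw-G construction F(x) = 1 - (1 - G(x)^a)^b with a Marshall-Olkin Exponential baseline; the parameter names (a, b, alpha, lam) are generic conventions, not necessarily the paper's notation.

```python
import math
import random

def kwmoe_quantile(u, a, b, alpha, lam):
    """Quantile function of the KwMOE distribution via inverse transform.

    Kw-G CDF: F(x) = 1 - (1 - G(x)^a)^b, with the Marshall-Olkin
    Exponential baseline survival
    1 - G(x) = alpha*exp(-lam*x) / (1 - (1-alpha)*exp(-lam*x)).
    """
    g = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)  # baseline CDF value
    s = 1.0 - g                                      # baseline survival value
    # Invert the MOE survival function for x:
    # s = alpha*t / (1 - (1-alpha)*t) with t = exp(-lam*x)
    t = s / (alpha + s * (1.0 - alpha))
    return -math.log(t) / lam

def kwmoe_sample(n, a, b, alpha, lam, seed=0):
    """Draw n KwMOE variates by inverse-transform sampling."""
    rng = random.Random(seed)
    return [kwmoe_quantile(rng.random(), a, b, alpha, lam) for _ in range(n)]
```

The same closed-form quantile function also yields the quantile-based shape measures the abstract mentions without numerical root finding.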
Kriging models have been widely used in computer experiments for the analysis of time-consuming computer codes. Based on kernels,
they are flexible and can be tuned to many situations. In this paper, we construct kernels that reproduce the computer code
complexity by mimicking its interaction structure. While the standard tensor-product kernel implicitly assumes that all interactions
are active, the new kernels are suited for a general interaction structure, and will take advantage of the absence of interaction
between some inputs. The methodology is twofold. First, the interaction structure is estimated from the data using an initial
standard Kriging model, and represented by a so-called FANOVA graph. New FANOVA-based sensitivity indices are introduced
to detect active interactions. Then this graph is used to derive the form of the kernel, and the corresponding Kriging model
is estimated by maximum likelihood. The performance of the overall procedure is illustrated by several 3-dimensional and 6-dimensional
simulated and real examples. A substantial improvement is observed when the computer code has a relatively high level of complexity.
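The idea of a kernel suited to a general interaction structure can be sketched as an additive combination of tensor-product kernels, one per group of jointly interacting inputs (a clique of the FANOVA graph). This is an illustrative construction under that assumption, not the paper's exact estimator; the clique decomposition and length-scale handling here are simplified.

```python
import numpy as np

def rbf_1d(x, y, ls):
    """One-dimensional squared-exponential kernel."""
    return np.exp(-0.5 * ((x - y) / ls) ** 2)

def graph_kernel(X, Y, cliques, ls):
    """Kriging kernel built from a FANOVA-graph clique structure.

    k(x, y) = sum over cliques C of prod_{i in C} k_i(x_i, y_i).
    Inputs that never appear together in a clique contribute no
    interaction terms, unlike the full tensor-product kernel
    (which corresponds to a single clique containing all inputs).
    """
    K = np.zeros((X.shape[0], Y.shape[0]))
    for C in cliques:
        Kc = np.ones((X.shape[0], Y.shape[0]))
        for i in C:
            Kc *= rbf_1d(X[:, i][:, None], Y[None, :, i], ls[i])
        K += Kc
    return K
```

For a 3-input code where input 2 does not interact with inputs 0 and 1, `cliques = [(0, 1), (2,)]` drops the absent interactions that the standard tensor-product kernel would implicitly assume active.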
Social Indicators Research - The registers of Dublin’s parishes in the seventeenth century provide access to aspects of civic and religious life. In the registers are records of burials,...
For over 50 years (1958–2012) the American National Election Studies (ANES) survey has been measuring citizens’ evaluations of the trustworthiness of the “government in Washington”—an indicator that has been widely used to monitor the dynamics of political trust in the US over time. However, a critical assumption in using attitudinal constructs for longitudinal research is that the meaning-and-interpretation of such items should be comparable across groups of respondents at any one point in time and across samples over time. Using multigroup confirmatory factor analysis for ordered-categorical data, we test the measurement equivalence assumption with data collected by the ANES from 1964 to 2008. The results confirm that the ANES’ political trust scale has the same basic factorial structure over time. But for two key items, several threshold parameters were found to be different across time points, indicating that the meaning-and-interpretation of these questions, and especially the question about whether the government in Washington wastes money that people pay in taxes, varies significantly over time.
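The threshold noninvariance the study reports can be illustrated with a small simulation: if the latent trust distribution is held fixed but an item's category thresholds shift between survey waves, the observed response proportions change even though underlying trust has not. The thresholds below are hypothetical values chosen for illustration, not estimates from the ANES.

```python
import numpy as np

def categorize(latent, thresholds):
    """Map a continuous latent trust score to ordered response categories."""
    return np.searchsorted(thresholds, latent)

rng = np.random.default_rng(1)
# Identical latent trust distribution in both "waves"
latent = rng.normal(0.0, 1.0, 100_000)

# Hypothetical thresholds: the second wave's thresholds are shifted,
# mimicking a change in how the question is interpreted over time
t_wave1 = np.array([-0.5, 0.5])
t_wave2 = np.array([-1.0, 0.0])

p1 = np.bincount(categorize(latent, t_wave1), minlength=3) / latent.size
p2 = np.bincount(categorize(latent, t_wave2), minlength=3) / latent.size
```

Comparing `p1` and `p2` shows shifted category proportions with no change in latent trust, which is exactly why noninvariant thresholds undermine over-time comparisons of the raw item.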
Federal and other regulatory agencies often use or claim to use a weight of evidence (WoE) approach in chemical evaluation. Their approaches to the use of WoE, however, differ significantly, rely heavily on subjective professional judgment, and merit improvement. We review uses of WoE approaches in key articles in the peer‐reviewed scientific literature, and find significant variations. We find that a hypothesis‐based WoE approach, developed by Lorenz Rhomberg et al., can provide a stronger scientific basis for chemical assessment while improving transparency and preserving the appropriate scope of professional judgment. Their approach, while still evolving, relies on the explicit specification of the hypothesized basis for using the information at hand to infer the ability of an agent to cause human health impacts or, more broadly, affect other endpoints of concern. We describe and endorse such a hypothesis‐based WoE approach to chemical evaluation.
Using data from the Major League Baseball free‐agent market, this study is the first to show that the productivity expected of the team a worker will join produces a significant, negative compensating wage differential. The younger workers in the sample drive this result, trading 25% of their wages to join teams with an expected productivity one standard deviation higher. This investment can be recouped if a reasonable increase in human capital occurs. These results are robust to contract length‐wage simultaneity and indicate that investment in human capital motivates the observed tradeoff, suggesting a new pathway through which human capital accumulation can affect wages. Reliable measures of workers' own past productivity and the productivity expected of a worker's future team provide key advantages to identifying these effects. (JEL J31, J24, M54)
Connections to the sea often define the character of coastal towns. However, as migrants arrive and economic diversification occurs, views about the use of marine resources and the ocean environment can change. Using survey data from Maine, we examined whether shifting demographics affect public perceptions of marine resource uses and coastal environmental concerns. We tested resource use and environmental items against a common set of demographic, background, and place-related variables. Results indicate that the level of education and the county of residence predict Mainers’ views about different marine resource uses and ocean-related environmental issues. Political party affiliation strongly influences environmental concern but not views about the use of marine resources. Migration history, on the other hand, has little effect. Understanding community contexts as well as individual background and ideological orientations will be critical as managers attempt to balance alternative uses of marine resources and resolve coastal environmental problems.
The main purpose of dose‐escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigations in later studies. In this paper, we introduce dose‐escalation designs that incorporate both dose‐limiting toxicities (DLTs) and indicative responses of efficacy into the procedure. A flexible nonparametric model is used for modelling the continuous efficacy responses while a logistic model is used for the binary DLTs. Escalation decisions are based on the combination of the probabilities of DLTs and expected efficacy through a gain function. On the basis of this setup, we then introduce 2 types of Bayesian adaptive dose‐escalation strategies. The first type of procedures, called “single objective,” aims to identify and recommend a single dose, either the maximum tolerated dose, the highest dose that is considered as safe, or the optimal dose, a safe dose that gives optimum benefit risk. The second type, called “dual objective,” aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose‐escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate different strategies via simulations based on an example constructed from a real trial on patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes. The dual‐objective designs give better results in terms of identifying the 2 target doses compared to the single‐objective designs.
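The idea of combining DLT probabilities and expected efficacy through a gain function can be sketched as follows. The functional form (efficacy penalized linearly by toxicity, with an admissibility cap on the DLT probability) and the constants `theta` and `p_max` are illustrative assumptions, not the paper's specification.

```python
def gain(p_dlt, eff, theta=1.0, p_max=0.3):
    """Benefit-risk gain: expected efficacy penalized by DLT probability.

    Doses whose estimated P(DLT) exceeds p_max are treated as unsafe
    and ruled out of the recommendation (gain of -infinity).
    """
    if p_dlt > p_max:
        return float("-inf")
    return eff - theta * p_dlt

def recommend(doses, p_dlt, eff):
    """Return the admissible dose maximizing the gain (the 'optimal dose')."""
    scored = [(gain(p, e), d) for d, p, e in zip(doses, p_dlt, eff)]
    return max(scored)[1]
```

For example, with three doses whose estimated DLT probabilities are (0.05, 0.15, 0.50) and expected efficacies (0.2, 0.6, 0.9), the highest dose is inadmissible and the middle dose attains the best benefit-risk tradeoff.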