Similar Articles
20 similar articles found (search time: 31 ms)
1.

In this paper, two innovative procedures for decomposing the Pietra index are proposed. The first allows decomposition by sources, while the second provides decomposition by subpopulations. As a special case of the latter, the "classical" two-component decomposition (within and between) can easily be obtained. A remarkable feature of both procedures is that they assess the contribution to the Pietra index at the finest possible level: each source for the first, and each subpopulation for the second. To highlight their usefulness, two applications to Italian professional football (soccer) teams are provided.
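The Pietra index itself is simple to compute: it equals half the mean absolute deviation divided by the mean (equivalently, the maximum gap between the Lorenz curve and the equality line). The abstract does not give the paper's decomposition formulas, so the sketch below simply attributes each unit's absolute-deviation term to its own subpopulation, a naive additive decomposition offered only as an illustration:

```python
def pietra_index(x):
    """Pietra (Ricci-Schutz) index: half the relative mean absolute deviation."""
    n = len(x)
    mu = sum(x) / n
    return sum(abs(xi - mu) for xi in x) / (2 * n * mu)

def pietra_by_subpopulation(groups):
    """Naive subpopulation decomposition: attribute each unit's |x_i - mu|
    term to its own group; the contributions sum to the overall index."""
    pooled = [xi for g in groups for xi in g]
    n = len(pooled)
    mu = sum(pooled) / n
    return [sum(abs(xi - mu) for xi in g) / (2 * n * mu) for g in groups]
```

For the toy data `[1, 1, 1, 5]` the index is 0.375, and splitting into groups `[1, 1]` and `[1, 5]` yields contributions 0.125 and 0.25, which add back to the total.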


2.
Current industrial processes are too sophisticated to be described by a single quality variable; many process variables must be analyzed together to assess process performance. Multivariate process capability indices (MPCIs) have been a focus of study over the last few decades, during which many authors have proposed alternative ways to build such indices. These measures are extremely attractive to those in charge of industrial processes because they summarize overall process performance, relative to its specifications, in a single number. In most practical applications, these indices are estimated from samples collected by measuring the variables of interest on the process outcome. This measurement activity introduces an additional source of variation into the data, whose effect on the properties of the indices must be considered. Unfortunately, this problem has received scant attention, at least in the multivariate domain. In this paper, we study how measurement errors affect the properties of one of the MPCIs recommended in previous research. The results indicate that even small measurement errors can distort the index value, leading to wrong conclusions about process performance.
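The attenuation mechanism is easy to see in the univariate analogue (the paper treats the multivariate case): gauge error inflates the observed process spread, so a capability index such as Cp, the tolerance width divided by six standard deviations, is biased downward. A hypothetical illustration with made-up specification limits and variance components:

```python
import math

def cp(usl, lsl, sigma):
    """Classical univariate capability index Cp = (USL - LSL) / (6 sigma)."""
    return (usl - lsl) / (6 * sigma)

# Hypothetical process and gauge (measurement-error) standard deviations.
sigma_process, sigma_gauge = 1.0, 0.5
# Observed variation is the sum of process and measurement variances.
sigma_observed = math.sqrt(sigma_process**2 + sigma_gauge**2)

true_cp = cp(10, -10, sigma_process)       # capability of the process itself
observed_cp = cp(10, -10, sigma_observed)  # what the estimate converges to
```

Even a gauge whose error standard deviation is half the process standard deviation shrinks the index noticeably, which is the univariate shadow of the distortion the paper documents for MPCIs.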

3.
Abstract

We use chi-squared and related pivot variables to induce probability measures for model parameters, obtaining some results on the induced densities that will prove useful. As an illustration, we consider mixed models with balanced cross nesting and use the algebraic structure to derive confidence intervals for the variance components. A numerical application is presented.  

4.
Summary. Many economic and social phenomena are measured by composite indicators computed as weighted averages of a set of elementary time series. Data are often collected by means of large sample surveys and take a long time to process, whereas the values of some elementary component series may be available considerably earlier than the others and may be used to forecast the composite index. This problem is addressed within the framework of prediction theory for stochastic processes. A method is proposed for exploiting anticipated information to minimize the mean-square forecast error, and for selecting the most useful elementary series. An application to the Italian general industrial production index demonstrates that knowledge of the anticipated values of some, or even just one, component series may reduce the forecast error considerably.
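The basic setup can be sketched in a few lines: a composite index is a weighted average of component series, and any component not yet released is replaced by a predicted value (here simply supplied; the paper derives the optimal predictor within stochastic-process prediction theory). The weights and figures below are hypothetical:

```python
def composite_index(weights, values):
    """Composite indicator as a weighted average of elementary series values."""
    return sum(w * v for w, v in zip(weights, values))

def nowcast(weights, known, forecasts):
    """Fill unavailable components (absent from `known`, keyed by position)
    with their forecasts, then compute the composite index."""
    filled = [known.get(i, forecasts[i]) for i in range(len(weights))]
    return composite_index(weights, filled)

# Hypothetical: component 0 is released early; 1 and 2 must be predicted.
weights = [0.5, 0.3, 0.2]
value = nowcast(weights, known={0: 100.0}, forecasts=[98.0, 102.0, 95.0])
```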

5.
We propose a segmented discrete-time model for the analysis of event history data in demographic research. Through a unified regression framework, the model provides estimates of the effects of explanatory variables and flexibly accommodates non-proportional differences via segmented relationships. Its main appeal lies in the ready availability of parameters, changepoints, and slopes, which can provide meaningful and intuitive information on the topic. Furthermore, specific linear constraints on the slopes may be imposed to investigate particular patterns. We investigate the intervals between cohabitation and first childbirth, and from first to second childbirth, using individual data on Italian women from the Second National Survey on Fertility. The model provides insight into the dramatic decrease in fertility experienced in Italy: it detects a 'common' tendency among more recent cohorts to delay the onset of childbearing, and a 'specific' postponement that depends strictly on educational level and age at cohabitation.

6.
The inclusion of linear deterministic effects in a time series model is important for obtaining an appropriate specification. Such effects may be due to calendar variation, outlying observations, or interventions. This article proposes a two-step method for simultaneously estimating an adjusted time series and the parameters of its linear deterministic effects. Although the main practical goal of applying this method might be only to estimate the adjusted series, an important by-product is a substantial gain in efficiency in the estimates of the deterministic effects. Some theoretical examples are presented to demonstrate the intuitive appeal of this proposal. The methodology is then applied to two real datasets; one of these applications investigates the impact of the 1995 economic crisis on Mexico's industrial production index.

7.
First, the linear and exponential paths are chosen as examples to show how the Divisia price and quantity indices can be integrated numerically. Then the equation of measurement of Eichhorn and Gleissner is adapted to price measurement and used to formulate certain properties of indices. A special class of properties, the reversal tests describing certain symmetries, is considered in particular. It turns out that the corresponding antitheses form a group of eight elements.

8.
Highly skewed, non-negative data can often be modeled by the delta-lognormal distribution in fisheries research. However, the coverage probabilities of existing interval estimation procedures are unsatisfactory for small sample sizes and highly skewed data. We propose a heuristic method for estimating confidence intervals for the mean of the delta-lognormal distribution, based on an asymptotic generalized pivotal quantity used to construct a generalized confidence interval. Simulation results show that the proposed procedure yields satisfactory coverage probabilities, expected interval lengths, and reasonable relative biases. Finally, the proposed method is applied to red cod density data as a demonstration.
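For reference, the delta-lognormal mean combines the zero probability with the lognormal mean of the positive observations: E[Y] = (1 − δ)·exp(μ + σ²/2), where δ is the proportion of zeros and μ, σ² are the log-scale mean and variance of the positive values. A minimal point estimate of this mean (not the paper's generalized-pivotal interval procedure) might look like:

```python
import math

def delta_lognormal_mean(data):
    """Plug-in estimate of the delta-lognormal mean (1 - delta) * exp(mu + s2/2):
    delta from the zero proportion, mu and s2 from the logs of the positives."""
    positives = [y for y in data if y > 0]
    delta = 1 - len(positives) / len(data)
    logs = [math.log(y) for y in positives]
    mu = sum(logs) / len(logs)
    s2 = sum((l - mu) ** 2 for l in logs) / (len(logs) - 1)
    return (1 - delta) * math.exp(mu + s2 / 2)
```

The interval procedure in the paper replaces these plug-in moments with draws of a generalized pivotal quantity, which is what restores good coverage in small, skewed samples.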

9.
Several survival regression models have been developed to assess the effects of covariates on failure times. In various settings, including surveys, clinical trials, and epidemiological studies, missing data often occur due to incomplete covariate information. Most existing methods for lifetime data assume covariates are missing at random (MAR). In many substantive applications, however, it is important to assess the sensitivity of key model inferences to the MAR assumption. The index of sensitivity to non-ignorability (ISNI) is a local sensitivity tool that measures the potential sensitivity of key model parameters to small departures from ignorability, without requiring estimation of a complicated non-ignorable model. We extend this sensitivity index to evaluate the impact of a covariate that is potentially missing not at random in survival analysis, using parametric survival models. The approach is applied to investigate the impact of missing tumor grade on post-surgical mortality in individuals with pancreas-head cancer in the Surveillance, Epidemiology, and End Results data set. Tumor grade is an important risk factor for cancer patients, yet many individuals with pancreas-head cancer in these data have missing tumor grade information. Our ISNI analysis shows that the estimated effects of most covariates with a significant effect on the survival time distribution, notably surgery and tumor grade, depend strongly on the assumed missingness mechanism for tumor grade. A simulation study is also conducted to evaluate the performance of the proposed index in detecting the sensitivity of key model parameters.

10.
Summary.  We develop Bayesian techniques for modelling the evolution of entire distributions over time and apply them to the distribution of team performance in Major League baseball for the period 1901–2000. Such models offer insight into many key issues (e.g. competitive balance) in a way that regression-based models cannot. The models involve discretizing the distribution and then modelling the evolution of the bins over time through transition probability matrices. We allow for these matrices to vary over time and across teams. We find that, with one exception, the transition probability matrices (and, hence, competitive balance) have been remarkably constant across time and over teams. The one exception is the Yankees, who have outperformed all other teams.
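The building block of such models, estimating a transition probability matrix from observed bin sequences, can be sketched as simple row proportions (the paper's Bayesian, time- and team-varying treatment is much richer):

```python
def transition_matrix(sequences, states):
    """Maximum-likelihood estimate of a homogeneous transition matrix:
    row-normalized counts of observed one-step transitions."""
    counts = {s: {t: 0 for t in states} for s in states}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for s in states:
        total = sum(counts[s].values())
        probs[s] = {t: (counts[s][t] / total if total else 0.0) for t in states}
    return probs

# Hypothetical two-bin example: "W" (above-average season), "L" (below-average).
P = transition_matrix([["W", "W", "L"], ["W", "L", "L"]], ["W", "L"])
```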

11.
A random-effects transition model is proposed to model the economic activity status of household members. The model accounts for two kinds of correlation: one due to the longitudinal nature of the study, handled by a transition parameter, and the other due to correlation between responses of members of the same household, handled by introducing random coefficients into the model. Results are presented for both homogeneous (no parameters change over time) and non-homogeneous Markov models with random coefficients. A Bayesian approach via Gibbs sampling is used for parameter estimation. Using the deviance information criterion, the random-effects transition model is compared with three other models that exclude random effects and/or transition effects; the full model gains precision by accounting for all aspects of the process that generated the data. To illustrate the utility of the proposed model, a longitudinal data set extracted from the Iranian Labour Force Survey is analysed to explore the simultaneous effect of covariates on current economic activity as a nominal response. Sensitivity analyses are also performed to assess the robustness of the posterior estimates of the transition parameters to perturbations of the prior parameters.

12.
The statistical analysis of patient-reported outcomes (PROs) as endpoints has been shown to be of great practical relevance. The scores or indexes resulting from the questionnaires used to measure PROs can be treated as continuous or ordinal. The goal of this study is to propose and evaluate a recoding of the scores so that they can be treated as binomial outcomes and therefore analyzed using logistic regression with random effects. The general recoding methodology is based on the observable values of the scores. To obtain an optimal recoding, the method is evaluated for different values of the binomial parameters and different probability distributions of the random effects. We illustrate, evaluate, and validate the proposed recoding with the Short Form-36 (SF-36) Survey and real data. The optimal recoding approach is useful and flexible. Moreover, it has a natural interpretation, not only for ordinal scores but also for questionnaires with many dimensions and different profiles where a common method of analysis is desired, such as the SF-36.
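As a toy version of the idea, a bounded score can be mapped to a count out of m notional Bernoulli trials, which a binomial logistic model with random effects can then analyze. The linear scaling below is a hypothetical simplification; the paper's recoding is driven by the observable score values rather than a fixed divisor:

```python
def recode_to_binomial(score, score_max=100, m=10):
    """Map a bounded score in [0, score_max] to a count out of m trials,
    so the score can be modeled as Binomial(m, p) in a logistic regression."""
    if not 0 <= score <= score_max:
        raise ValueError("score outside the instrument's range")
    return round(score / score_max * m)
```

A respondent scoring 73 on a 0-100 SF-36-style scale would then contribute the outcome "7 successes out of 10 trials" to the mixed logistic model.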

13.
This paper derives an application of the minimum chi-squared (MCS) methodology to estimate the parameters of the unimodal symmetric stable distribution. The proposed method is especially suitable for large data sets, both regular and non-standard. Monte Carlo simulations compare the efficiency of MCS estimation with that of the McCulloch quantile algorithm. For grouped observations, the evidence favours the MCS method. For ungrouped data, MCS estimation generally outperforms McCulloch's quantile method for samples larger than 400 observations and for large values of the characteristic exponent; the relative advantage of MCS over the McCulloch estimators increases with sample size. The empirical example analyses the highly irregular return distributions of selected securities from the Warsaw Stock Exchange. The quantile and maximum likelihood estimates of the characteristic exponents are generally smaller than the MCS ones, reflecting the bias in the traditional methods due to the lack of adjustment for censored and clustered observations, and demonstrating the flexibility of the proposed MCS approach.

14.
We propose using the group lasso algorithm for logistic regression to construct a risk scoring system for predicting disease in swine. This work is motivated by the need to develop a risk scoring system from survey data on risk factors for porcine reproductive and respiratory syndrome (PRRS), a major health, production, and financial problem for swine producers in nearly every country. Group lasso provides an attractive solution because it achieves group variable selection and stabilizes parameter estimates at the same time. We choose the penalty parameter for group lasso by leave-one-out cross-validation, using the area under the receiver operating characteristic curve as the criterion. Survey data for 896 swine breeding herd sites in the USA and Canada, collected between March 2005 and March 2009, are used to construct the risk scoring system for predicting PRRS outbreaks. We show that our scoring system significantly improves on the current system, which is based on expert opinion, and that it is superior, in terms of area under the curve, to a system developed using a multiple logistic regression model selected by variable significance.
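Once coefficients have been selected and stabilized (by group lasso in the paper), a common final step in building a risk score is scaling the coefficients to integer points. The abstract does not give the paper's exact scoring construction, so the sketch below uses a conventional rule, the smallest positive coefficient equals one point, with entirely hypothetical risk-factor names and values:

```python
def risk_points(coefs, base=None):
    """Scale logistic-regression coefficients to integer risk points.
    By default one point equals the smallest positive coefficient."""
    if base is None:
        base = min(c for c in coefs.values() if c > 0)
    return {name: round(c / base) for name, c in coefs.items()}

# Hypothetical fitted coefficients for three PRRS risk factors.
points = risk_points({"vaccination": 0.3, "herd_size": 0.6, "proximity": 1.2})
```

A herd's total score is then the sum of the points for the factors it exhibits, which is what makes such systems usable in the field without a computer.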

15.
Negative binomial and Poisson distributions are fitted to data on scores in Association Football for the seasons 1946–47 to 1983–84. There are strong grounds for preferring the negative binomial up to 1970; thereafter the Poisson seems adequate. Simplification is achieved by fitting the negative binomial with a common parameter. The analyses are set in the context of previous applications and interpretations in the area. Different models giving rise to the negative binomial or Poisson are investigated, and some support is found for models not previously advanced in this context. Notwithstanding the success of such exercises, some scepticism is expressed about the interpretations placed on previous analyses.
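A quick diagnostic behind this model choice is the variance-to-mean ratio: the Poisson forces variance = mean, while the negative binomial allows overdispersion with Var = μ + μ²/k. A method-of-moments check, illustrative only and with made-up goal counts, not the paper's fitting procedure:

```python
def poisson_or_nb(counts):
    """Pick Poisson or negative binomial by the variance-to-mean ratio,
    returning method-of-moments parameters: k solves var = mean + mean^2/k."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    if var <= mean:
        return ("poisson", mean, None)
    k = mean ** 2 / (var - mean)
    return ("negative_binomial", mean, k)

# Hypothetical goals-per-match counts.
label, mean, k = poisson_or_nb([0, 1, 1, 2, 2, 2, 3, 5])
```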

16.
The distribution of aggregate claims in one year plays an important role in actuarial statistics, for example in computing insurance premiums when both the number and the size of the claims must be built into the model. When the number of claims follows a Poisson distribution, the aggregate distribution is called the compound Poisson distribution. In this article we assume that the claim size follows an exponential distribution, and we study this model extensively by assuming a bivariate prior distribution, with gamma marginals, for the parameters of the Poisson and exponential distributions. This study yields expressions for net premiums and for marginal and posterior distributions in terms of well-known special functions. A Bayesian robustness study of the model is then carried out. Bayesian robustness for bivariate models was treated in depth in the 1990s, producing numerous results, but few applications dealing with this problem can be found in the literature.
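Under the model named in the abstract, the net premium has a simple closed form: with Poisson(λ) claim counts and Exponential(θ) claim sizes (mean 1/θ), E[S] = E[N]·E[X] = λ/θ. A frequentist Monte Carlo sketch of the aggregate claim, with hypothetical parameter values (the paper works with gamma priors on λ and θ rather than fixed values):

```python
import math
import random

def net_premium(lam, theta):
    """Net premium E[S] = E[N] * E[X] = lam / theta."""
    return lam / theta

def sample_aggregate(lam, theta, rng):
    """One draw of the compound Poisson-exponential aggregate claim S."""
    # Knuth's Poisson sampler: multiply uniforms until the product drops
    # below exp(-lam); the number of extra multiplications is Poisson(lam).
    n, p, target = 0, rng.random(), math.exp(-lam)
    while p > target:
        n += 1
        p *= rng.random()
    return sum(rng.expovariate(theta) for _ in range(n))

rng = random.Random(42)
mc_mean = sum(sample_aggregate(3, 0.5, rng) for _ in range(20000)) / 20000
```

With λ = 3 and θ = 0.5 the analytic premium is 6, and the Monte Carlo mean lands close to it.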

17.
The literature on the export-led growth (ELG) hypothesis, which is of utmost importance for policymaking in emerging countries, provides mixed evidence for its validity. Recent contributions focus on the time dependence of the relationship between export and output growth using rolling causality techniques based on vector autoregressive models. These models take a short-term view that captures individual policy-induced developments, but long-term structural changes cannot be covered by short-term examinations. This paper therefore examines the time-varying validity of the ELG hypothesis for India over the period 1960–2011, using rolling causality techniques for both the short-run and the long-run horizon. For the first time, window-wise optimal lag-selection procedures are applied in connection with these techniques. We find that exports long-run caused output growth from 1997 until 2009, which can be seen as a consequence of the political reforms of the 1990s that boosted economic growth by generating foreign direct investment opportunities and higher exports. In the short run, exports significantly caused output in the period 1998–2003, following a concentration of liberalization measures in 1997. Causality in the reverse direction, from output to exports, appears relevant only in the short run.

18.
One of the pivotal devices B. Traven employs in his short story 'The Cattle Drive' is a contract between the cattle owner and the trail boss who brings the livestock to market. By specifying a per-diem rate, the contract appears to encourage a wage-maximizing trail boss to delay the delivery of the cattle. However, a statistical model of the contract demonstrates that a rational trail boss has an incentive to maintain a rapid rate of travel. The article concludes that statistics can be applied in non-traditional ways, such as to the analysis of the plot of a fictional story. The statistical model suggests plausible alternative endings to the story under various parameter assumptions. Finally, it demonstrates that a well-crafted story can provide an excellent case study of how contracts create incentives and influence decision-making.

19.
For a trial whose primary endpoint is overall survival for a molecule with curative potential, statistical methods that rely on the proportional hazards assumption may underestimate the power and the time to final analysis. We show how a cure proportion model can be used to obtain the necessary number of events and the appropriate timing via simulation. If phase 1 results for the new drug are exceptional and/or the medical need in the target population is high, a phase 3 trial might be initiated directly after phase 1. Building a futility interim analysis into such a pivotal trial may mitigate the uncertainty of moving directly to phase 3. However, if cure is possible, overall survival might not be mature enough at the interim to support a futility decision; we propose basing this decision on an intermediate endpoint that is sufficiently associated with survival. Planning for such an interim can be interpreted as making a randomized phase 2 trial part of the pivotal trial: if the trial stops at the interim, the data are analyzed and a decision on a subsequent phase 3 trial is made; if it continues, the phase 3 trial is already underway. To select a futility boundary, a mechanistic simulation model that connects the intermediate endpoint and survival is proposed. We illustrate how this approach was used to design a pivotal randomized trial in acute myeloid leukemia, and we discuss the historical data that informed the simulation model and the operational challenges of implementing it.
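The event-count consequence of a cure fraction can be seen in a small Monte Carlo: cured patients contribute no events, so the number of events observed by a given analysis time plateaus below what an all-susceptible model predicts, which is why event milestones arrive later than a proportional-hazards calculation suggests. A sketch with hypothetical parameters (exponential survival for non-cured patients; the paper's mechanistic model additionally links an intermediate endpoint to survival):

```python
import random

def events_by(t, n, cure_prob, hazard, seed=1):
    """Simulated number of death events observed by time t among n patients
    when a fraction cure_prob is cured and the rest have Exponential(hazard)
    survival times."""
    rng = random.Random(seed)
    events = 0
    for _ in range(n):
        if rng.random() < cure_prob:
            continue                      # cured: never contributes an event
        if rng.expovariate(hazard) <= t:  # susceptible: event before time t?
            events += 1
    return events

with_cure = events_by(2.0, 10000, cure_prob=0.3, hazard=0.5)
no_cure = events_by(2.0, 10000, cure_prob=0.0, hazard=0.5)
```

With a 30% cure fraction the expected event count by t = 2 is 0.7·(1 − e^(−1)) ≈ 44% of patients, versus about 63% under the no-cure exponential model.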

20.
Agreement among raters is an important issue in medicine, as well as in education and psychology. Agreement between two raters on a nominal or ordinal rating scale has been investigated in many articles, and the multi-rater case with normally distributed ratings has also been explored at length; however, research on multiple raters using an ordinal rating scale is lacking. In this simulation study, several methods for analyzing rater agreement were compared, focusing on the special case of multiple raters using a bounded ordinal rating scale. The proposed agreement methods were compared under three main ordinal data simulation settings (normal, skewed, and shifted data) and were also applied to a real data set from dermatology. The simulation results showed that Kendall's W and the mean gamma greatly overestimated agreement in data sets with shifts. ICC4 for bounded data should be avoided in agreement studies with rating scales of fewer than five categories, where it greatly overestimated the simulated agreement. The difference in bias for all methods under study, except the mean gamma and Kendall's W, decreased as the length of the rating scale increased. The bias of ICC3 was consistent and small for nearly all simulation settings except the low-agreement setting with shifted data. Researchers should be careful in selecting agreement methods, especially when shifts in ratings between raters exist, and may wish to apply more than one method before drawing conclusions.
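Kendall's W, one of the compared methods, is easy to state: for m raters ranking n subjects, W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the subjects' rank totals from their mean. A minimal implementation without tie correction:

```python
def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for m raters and n subjects.
    `ranks` is a list of m rankings, each a list of the n subjects' ranks."""
    m, n = len(ranks), len(ranks[0])
    totals = [sum(r[j] for r in ranks) for j in range(n)]   # rank totals
    mean_total = m * (n + 1) / 2                            # under no preference
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

W is 1 under perfect agreement and 0 when the rank totals are identical, which is exactly why it cannot distinguish a systematic shift between raters from genuine concordance, the weakness the simulation exposes.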
