Similar Literature (20 results)
1.
Modeling data that are non-normally distributed and involve random effects is the major challenge in analyzing binomial data from split-plot designs. Seven methods for analyzing such data, based on mixed, generalized linear, or generalized linear mixed models, are compared with respect to the size and power of the tests. The study shows that modeling the random effects properly is more important than adjusting the analysis for non-normality. Methods based on mixed and generalized linear mixed models maintain Type I error rates better than those based on generalized linear models, and mixed-model methods tend to have higher power than generalized linear mixed models when the sample size is small.
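As a rough illustration of one mixed-model approach that such comparisons typically include (not necessarily one of the paper's seven methods), the sketch below simulates binomial split-plot data with a whole-plot random effect and fits a linear mixed model to empirical logits; the factor layout and all parameter values are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_wplots, n_b, m = 8, 2, 50          # whole plots, split-plot levels, binomial size
rows = []
for w in range(n_wplots):
    u = rng.normal(0, 0.5)           # whole-plot random effect
    a = w % 2                        # whole-plot factor (assumed two levels)
    for b in range(n_b):             # split-plot factor
        eta = -0.5 + 0.4 * a + 0.3 * b + u
        y = rng.binomial(m, 1 / (1 + np.exp(-eta)))
        rows.append(dict(wplot=w, a=a, b=b,
                         elogit=np.log((y + 0.5) / (m - y + 0.5))))
df = pd.DataFrame(rows)
# linear mixed model on empirical logits, random intercept per whole plot
fit = smf.mixedlm("elogit ~ C(a) * C(b)", df, groups=df["wplot"]).fit()
print(fit.summary())
```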

2.
Abstract

Sliced average variance estimation (SAVE) is one of the best methods for estimating the central dimension-reduction (CDR) subspace in semiparametric regression models when the covariates are normal. SAVE has recently been used to analyze DNA microarray data, especially in tumor classification, but its most important limitation is the normality assumption on the covariates. In this article, the asymptotic behavior of the estimates of the CDR space under varying slice size is studied through simulation, both when the covariates are non-normal but satisfy the linearity condition and when they are slightly perturbed from the normal distribution. We observe that serious errors may occur when the normality assumption is violated.
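The article gives no code, but a minimal numpy sketch of the basic SAVE estimator (standardize the covariates, slice on the sorted response, average the squared deviations of the within-slice covariances from the identity, and eigen-decompose) might look as follows; the slice count H and the function name are choices of this illustration.

```python
import numpy as np

def save_directions(X, y, d, H=10):
    """Sliced Average Variance Estimation: d estimated CDR directions."""
    n, p = X.shape
    Xc = X - X.mean(0)
    S = np.cov(Xc, rowvar=False)
    # inverse square root of the covariance, for standardization
    w, V = np.linalg.eigh(S)
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T
    Z = Xc @ S_inv_half
    order = np.argsort(y)
    M = np.zeros((p, p))
    for sl in np.array_split(order, H):        # slice on the sorted response
        C = np.eye(p) - np.cov(Z[sl], rowvar=False)   # needs >= 2 points/slice
        M += (len(sl) / n) * C @ C
    _, U = np.linalg.eigh(M)
    B = S_inv_half @ U[:, ::-1][:, :d]         # top-d eigenvectors, back-transformed
    return B / np.linalg.norm(B, axis=0)
```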

3.
Since the 1960s the Bayesian case against frequentist inference has been partly built on several “classic” examples devised to show how frequentist inference procedures can give rise to fallacious results; see Berger and Wolpert (1988). The primary aim of this note is to revisit one of these examples, the Berger location model, which is supposed to demonstrate the fallaciousness of frequentist confidence interval (CI) estimation. A closer look, however, reveals that the fallacious results stem primarily from the problematic nature of the example itself: it is based on a non-regular probability model that enables one to (indirectly) assign probabilities to the unknown parameter. Moreover, the proposed confidence set is not a proper frequentist CI, in the sense that it is not defined in terms of legitimate error probabilities.
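For readers unfamiliar with the example: in the version usually attributed to Berger and Wolpert, one observes two draws X_i = θ ± 1 (each sign with probability 1/2), and the textbook 75% confidence set has very different coverage depending on whether the two draws agree. A small simulation (my reconstruction of that classic example, not code from the note) illustrates the point:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.0, 100_000
e = rng.choice([-1.0, 1.0], size=(n, 2))        # X_i = theta ± 1, prob 1/2 each
X = theta + e
differ = X[:, 0] != X[:, 1]
# the textbook confidence set: the midpoint if the draws differ, X1 - 1 otherwise
est = np.where(differ, X.mean(1), X[:, 0] - 1)
cover = est == theta
print(cover.mean())           # ~0.75 unconditionally
print(cover[differ].mean())   # 1.0 given X1 != X2
print(cover[~differ].mean())  # ~0.5 given X1 == X2
```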

4.
5.
We aimed to determine the most appropriate change measure among the simple difference, percent change, and symmetrized percent change in simple paired designs. For this purpose, we devised a computer simulation program. Because the distributions of percent and symmetrized percent change values are skewed and bimodal, the paired t-test did not perform well in terms of Type I error rate and power. To be able to use percent change or symmetrized percent change as the change measure, either the distribution of the test statistic should be transformed to a known theoretical distribution, or a new test statistic for these measures should be developed.
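For reference, the three measures for a pre/post pair (x1, x2) are the simple difference x2 − x1, the percent change 100(x2 − x1)/x1, and the symmetrized percent change 200(x2 − x1)/(x1 + x2). A minimal simulation sketch in the spirit of (but not reproducing) the authors' program, using independent normal pre/post values under the null of no change:

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)
reps, n, alpha = 5000, 20, 0.05
rej = dict(diff=0, pct=0, spc=0)
for _ in range(reps):
    pre = rng.normal(100, 15, n)
    post = rng.normal(100, 15, n)        # H0 true: no systematic change
    d = post - pre
    pct = 100 * d / pre                  # percent change
    spc = 200 * d / (post + pre)         # symmetrized percent change
    rej["diff"] += ttest_1samp(d, 0).pvalue < alpha
    rej["pct"] += ttest_1samp(pct, 0).pvalue < alpha
    rej["spc"] += ttest_1samp(spc, 0).pvalue < alpha
# empirical Type I error rates of the paired t-test for each change measure
print({k: v / reps for k, v in rej.items()})
```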

6.
This article develops a statistical test for the presence of a jump in an otherwise smooth transition process. The null model is a threshold regression and the alternative is a smooth transition model. We propose a quasi-Gaussian likelihood ratio statistic and derive its asymptotic distribution, which is the maximum of a two-parameter Gaussian process with a nonzero bias term. Asymptotic critical values can be tabulated and depend on the transition function employed. A simulation method for computing empirical critical values is also developed. Finite-sample performance of the test is assessed via Monte Carlo simulations, and the test is applied to investigate the dynamics of racial segregation within cities across the United States.
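A rough sketch of how such a likelihood ratio statistic might be computed, with both regimes profiled over parameter grids; this assumes a logistic transition function and a single regressor, which are choices of the illustration rather than the paper's exact procedure, and critical values (which the paper shows depend on the transition function) are omitted.

```python
import numpy as np

def lr_jump_statistic(y, x, q, gammas, cs):
    """Quasi-Gaussian LR statistic contrasting a threshold fit (null: jump)
    with a logistic smooth-transition fit (alternative)."""
    n = len(y)

    def ssr(G):  # OLS of y on [1, x, x*G]; returns residual sum of squares
        Z = np.column_stack([np.ones(n), x, x * G])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r = y - Z @ beta
        return r @ r

    ssr_thr = min(ssr((q > c).astype(float)) for c in cs)       # null: jump
    ssr_str = min(ssr(1.0 / (1.0 + np.exp(-g * (q - c))))       # alternative
                  for g in gammas for c in cs)
    return n * np.log(ssr_thr / ssr_str)
```

Reasonable grids might be, for example, cs = np.quantile(q, np.linspace(0.1, 0.9, 40)) and gammas = np.geomspace(0.5, 50, 20).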

7.
Abstract

Serialists have long believed that their field is underrepresented in the library and information science (LIS) curriculum. A recent review of the Web sites of ALA-accredited LIS programs shows no significant change in the percentage of formal serials courses in those programs. The problem of adequate formal serials education is examined in the broader context of LIS education as a whole. Increasing traditional, formal serials education is an impractical goal; instead, we should develop continuing education opportunities and work to dispel some of the mystique of serials.

8.
Four stabbings to death in a single day. Ninety murders in 7 months. Shocking figures, or are they? Knife crime makes the headlines almost daily, but are Londoners really at increased risk of being murdered? David Spiegelhalter and Arthur Barnett investigate, and find a predictable pattern of murder.
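The "predictable pattern" refers to homicide counts behaving much like a Poisson process. Using only the figures quoted above (90 murders in roughly 7 months, about 214 days), a back-of-the-envelope check of how surprising a four-murder day is:

```python
from scipy.stats import poisson

lam = 90 / 214              # ~0.42 murders per day, from the quoted figures
p_day = poisson.sf(3, lam)  # P(4 or more murders on any given day)
print(p_day, 214 * p_day)   # daily probability, and expected such days in 7 months
```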

9.
Hopes and expectations for the use and utility of new, emerging biomarkers in drug development have probably never been higher, especially in oncology. Biomarkers are exalted as vital patient-selection tools in the effort to target those most likely to benefit from a new drug, and thereby to reduce development costs, lessen risk, and shorten development times. It is further hoped that biomarkers can be used as surrogate endpoints for clinical outcomes, to demonstrate effectiveness and, ultimately, to support drug approval. However, I perceive that all is not straightforward and that, particularly in terms of the promise of accelerated drug development, biomarker strategies may not in all cases deliver the advances and advantages hoped for.

10.
11.
12.
To obtain maximum likelihood (ML) estimates in factor analysis (FA), we propose in this paper a novel and fast conditional maximization (CM) algorithm with quadratic and monotone convergence, consisting of a sequence of CM log-likelihood (CML) steps. The main contribution of the algorithm is that a closed-form expression for the parameter updated in each step can be obtained explicitly, without resorting to any numerical optimization method. In addition, a new ECME algorithm similar to that of Liu (Biometrika 81, 633–648, 1994) is obtained as a by-product; it turns out to be very close to the simple iteration algorithm proposed by Lawley (Proc. R. Soc. Edinb. 60, 64–82, 1940), but unlike Lawley's it is guaranteed to increase the log-likelihood at every iteration and hence to converge. Both algorithms inherit the simplicity and stability of EM, but their convergence behaviors differ markedly, as revealed in our extensive simulations: (1) in most situations, ECME and EM perform similarly; (2) CM outperforms EM and ECME substantially in all situations, whether assessed by CPU time or by the number of iterations, and for cases close to the well-known Heywood case it accelerates EM by factors of around 100 or more. CM is also much less sensitive to the choice of starting values than EM and ECME.
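The closed-form CM updates are the paper's contribution and are not reproduced here. For orientation, a compact numpy sketch of the classical EM baseline against which CM and ECME are compared (the standard Rubin–Thayer updates) is:

```python
import numpy as np

def fa_em(S, k, n_iter=1000):
    """Classical EM for the factor model x = Lambda z + e, z ~ N(0, I_k),
    e ~ N(0, diag(Psi)); S is the p x p sample covariance matrix."""
    p = S.shape[0]
    Lam = np.linalg.cholesky(S + 1e-6 * np.eye(p))[:, :k]   # crude start
    Psi = np.diag(S).copy()
    for _ in range(n_iter):
        # E-step: E[z|x] = B x with B = (I + Lam' Psi^-1 Lam)^-1 Lam' Psi^-1
        PiL = Lam / Psi[:, None]                  # Psi^-1 Lam
        G = np.linalg.inv(np.eye(k) + Lam.T @ PiL)
        B = G @ PiL.T
        # M-step (Rubin & Thayer, 1982)
        SB = S @ B.T
        Lam = SB @ np.linalg.inv(G + B @ SB)
        Psi = np.diag(S - Lam @ (B @ S))
    return Lam, Psi
```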

13.
We review the weighted likelihood estimating equations methodology introduced by Markatou, Basu and Lindsay (1995) and Basu, Markatou and Lindsay (1995) and compare it, in the case of symmetric and asymmetric contamination, with Huber's M-estimators of location. The simulation study shows that the weighted likelihood estimating equations estimator is at least as competitive as Huber's M-estimators in the case of symmetric contamination; in the case of asymmetric contamination it may be superior to them.
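Huber's M-estimator of location, the benchmark in this comparison, can be computed by iteratively reweighted least squares. A minimal sketch, with the scale fixed at the normalized MAD and the conventional tuning constant c = 1.345 (both of which are choices of this illustration):

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via IRLS; scale fixed at normalized MAD."""
    x = np.asarray(x, float)
    s = 1.4826 * np.median(np.abs(x - np.median(x)))   # robust scale estimate
    mu = np.median(x)                                  # robust starting value
    for _ in range(max_iter):
        r = (x - mu) / s
        # Huber weights: 1 inside [-c, c], c/|r| outside
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```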

14.
15.
16.
A study of twenty-seven fields in 350 highly ranked universities examines the relationship between reputation and rank. We find that many metrics associated with research prowess correlate significantly with university reputation. However, the next logical step, examining how that relationship varies across academic fields, did not always yield the expected results. The phrase “publish or perish” clearly has very different meanings in different fields.
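Correlations of this kind are typically computed on ranks; a toy sketch with invented numbers (the study's actual metrics and data are not reproduced here):

```python
from scipy.stats import spearmanr

# hypothetical per-university values for one field: a research metric
# (e.g. citation counts) and a reputation score; illustrative numbers only
citations = [120, 340, 95, 410, 220, 60, 510, 180]
reputation = [55, 78, 40, 85, 70, 35, 90, 60]
rho, pval = spearmanr(citations, reputation)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")
```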

17.
18.
Summary. The paper examines the capital structure adjustment dynamics of listed non-financial corporations in seven east Asian countries before, during and after the crisis of 1997–1998. Our methodology allows speeds of adjustment to vary not only among firms but also over time, distinguishing between cases of sudden and smooth adjustment. Although average leverage was much higher in the worst affected countries than among firms in the least affected countries, generalized method-of-moments analysis of the Worldscope panel data suggests that average speeds of adjustment were lower there. This holds also for the severely financially distressed firms in some of the worst affected countries, though the trend reversed in the post-crisis period. These findings have important implications for the regulatory environment as well as for access to market finance.
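The "speed of adjustment" in such studies is usually the λ of a partial-adjustment specification; a standard form (generic notation, not necessarily the paper's exact model) is

\[
\mathrm{Lev}_{it} - \mathrm{Lev}_{i,t-1}
  = \lambda_{it}\,\bigl(\mathrm{Lev}^{*}_{it} - \mathrm{Lev}_{i,t-1}\bigr) + \varepsilon_{it},
\qquad
\mathrm{Lev}^{*}_{it} = \beta^{\prime} x_{it},
\]

so that the estimating equation \(\mathrm{Lev}_{it} = (1-\lambda_{it})\,\mathrm{Lev}_{i,t-1} + \lambda_{it}\beta^{\prime}x_{it} + \varepsilon_{it}\) contains a lagged dependent variable, which is why GMM panel estimators are used; letting \(\lambda_{it}\) vary across firms and over time is what allows sudden and smooth adjustment to be distinguished.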

19.
20.
Summary. In many countries, caseworkers in public employment offices have the dual roles of counselling and monitoring unemployed people. These roles often conflict, which results in important caseworker heterogeneity: some caseworkers regard providing services to their clients and satisfying their demands as their primary task, whereas others may pursue their own strategies, even against the will of the unemployed person, for instance assigning jobs and labour market programmes without the client's consent. On the basis of a very detailed linked jobseeker–caseworker data set for Switzerland, we investigate the effect of caseworkers' co-operativeness on their clients' employment probabilities. Modified statistical matching methods reveal that caseworkers who place less emphasis on a co-operative and harmonious relationship with their clients increase their clients' chances of employment in the short and medium term.
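The paper's "modified statistical matching methods" are more elaborate, but the basic idea of comparing clients of less co-operative caseworkers with comparable controls can be sketched with generic propensity-score matching; all names and the 1-NN design here are illustrative, not the paper's method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_effect(X, treated, y):
    """1-nearest-neighbour propensity-score matching estimate of the
    average effect on the treated. X: covariates, treated: 0/1, y: outcome."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t, c = treated == 1, treated == 0
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[t].reshape(-1, 1))
    return float(np.mean(y[t] - y[c][idx.ravel()]))
```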
