Similar Documents
Found 20 similar documents (search time: 15 ms)
2.
This paper argues that the rural tax-and-fee reform cannot truly reduce the burden on farmers. From the perspective of tax equity, it analyzes the rural tax and fee burden in China, compares China's agricultural tax system with agricultural tax systems internationally, and proposes a fundamental way out for reducing the burden on farmers.

3.
Summary. The paper briefly examines the substantial risks of smoking, and how people's perception of them may be influenced by tobacco control policy and by the activities of the tobacco industry. The comparative lack of effectiveness of the self-regulation system of implementing tobacco control policy is noted, illustrated by the example of cigarette pack health warnings, from the first examples under the voluntary system to the significantly more robust and effective pictorial warnings system pioneered by Canada and implemented by legislation, similar to measures recently approved by the European Union. Other aspects of tobacco control policy are discussed, including health education, restricting the promotion of tobacco and changing the social acceptability of smoking. Three areas of success in the UK—taxation, the leadership of doctors and sustained media advocacy—are described; and the paper concludes by looking at prospects for the future, with the forthcoming ban on most forms of tobacco promotion and the challenge of responding to growing demands to protect non-smokers from exposure to other people's tobacco smoke in the workplace and in public places.

4.
The use of logistic regression analysis is widely applicable to epidemiologic studies concerned with quantifying an association between a study factor (i.e., an exposure variable) and a health outcome (i.e., disease status). This paper reviews the general characteristics of the logistic model and illustrates its use in epidemiologic inquiry. Particular emphasis is given to the control of extraneous variables in the context of follow-up and case-control studies. Techniques for both unconditional and conditional maximum likelihood estimation of the parameters in the logistic model are described and illustrated. A general analysis strategy is also presented which incorporates the assessment of both interaction and confounding in quantifying an exposure-disease association of interest.
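A minimal sketch of the unconditional maximum likelihood estimation this abstract describes: a logistic model fitted by Newton-Raphson to simulated exposure-disease data. The effect sizes and sample size are invented for illustration, not taken from the paper.

```python
import numpy as np

def fit_logistic(x, y, n_iter=25):
    """Unconditional maximum likelihood for a logistic model via Newton-Raphson."""
    X = np.column_stack([np.ones(len(y)), x])        # intercept + exposure
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
        score = X.T @ (y - p)                        # gradient of the log-likelihood
        info = X.T @ (X * (p * (1 - p))[:, None])    # observed information
        beta = beta + np.linalg.solve(info, score)
    return beta

# simulated follow-up data: true intercept -1.0, true log odds ratio 1.0
rng = np.random.default_rng(0)
exposure = rng.binomial(1, 0.5, 5000)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 1.0 * exposure)))
disease = rng.binomial(1, p_true)
beta = fit_logistic(exposure, disease)               # close to [-1.0, 1.0]
```

Extraneous variables would enter simply as extra columns of `X`; the conditional likelihood for matched case-control designs requires a different estimator and is not shown.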

5.
Four teams of analysts try to determine the existence of an association between inflammatory bowel disease and certain genetic markers on human chromosome number 6. Their investigation involves data on several control populations and on 110 families with two or more affected individuals. The problem is introduced by Mirea, Bull, Silverberg and Siminovitch; they and three other groups (Chen, Kalbfleisch and Romero‐Hidalgo; Darlington and Paterson; Roslin, Loredo‐Osti, Greenwood and Morgan) present analyses. Their approaches are discussed by Field and Smith.

6.
It is important for educational planners to estimate the likelihood and time-scale of graduation of students enrolled on a curriculum. The particular case we are concerned with emerges when studies are not completed in the prescribed interval of time. Under these circumstances we use a framework of survival analysis applied to lifetime-type educational data to examine the distribution of duration of undergraduate studies for 10,313 students, enrolled in a Greek university during ten consecutive academic years. Non-parametric and parametric survival models have been developed for handling this distribution, as well as a modified procedure for testing the goodness-of-fit of the models. Data censoring was taken into account in the statistical analysis, and the problems of thresholding of graduation and of perpetual students are also addressed. We found that the proposed parametric model adequately describes the empirical distribution provided by non-parametric estimation. We also found a significant difference between the duration of studies of men and women students. The proposed methodology could be useful for analysing data from any other type and level of education, or general lifetime data with similar characteristics.
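The non-parametric side of such an analysis can be sketched with a product-limit (Kaplan-Meier) estimator, treating still-enrolled students as right-censored. The toy durations below are invented, not the Greek university data.

```python
import numpy as np

def kaplan_meier(time, graduated):
    """Product-limit estimate of S(t); graduated=0 marks a censored (still-enrolled) student."""
    order = np.argsort(time)
    time, graduated = time[order], graduated[order]
    surv, s = [], 1.0
    for t in np.unique(time):
        at_risk = np.sum(time >= t)                      # still enrolled just before t
        events = np.sum((time == t) & (graduated == 1))  # graduations at t
        s *= 1.0 - events / at_risk
        surv.append((t, s))
    return surv

# toy years-to-degree; 0 = censored at last observation
time = np.array([4, 4, 5, 5, 5, 6, 7, 8, 8, 10])
grad = np.array([1, 1, 1, 1, 0, 1, 1, 1, 0, 0])
km = dict(kaplan_meier(time, grad))   # e.g. S(4) = 0.8, S(5) = 0.6
```

A parametric model would instead be fitted by maximizing a censored-data likelihood and compared against this empirical curve.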

7.
In this paper, a simulation study is conducted to systematically investigate the impact of dichotomizing longitudinal continuous outcome variables under various types of missing data mechanisms. Generalized linear models (GLM) with standard generalized estimating equations (GEE) are widely used for longitudinal outcome analysis, but these semi‐parametric approaches are only valid when data are missing completely at random (MCAR). Alternatively, weighted GEE (WGEE) and multiple imputation GEE (MI‐GEE) were developed to ensure validity under missing at random (MAR). Using a simulation study, the performance of standard GEE, WGEE and MI‐GEE on incomplete longitudinal dichotomized outcome analysis is evaluated. For comparisons, likelihood‐based linear mixed effects models (LMM) are used for incomplete longitudinal original continuous outcome analysis. Focusing on dichotomized outcome analysis, MI‐GEE with original continuous missing data imputation procedure provides well controlled test sizes and more stable power estimates compared with any other GEE‐based approaches. It is also shown that dichotomizing longitudinal continuous outcome will result in substantial loss of power compared with LMM. Copyright © 2009 John Wiley & Sons, Ltd.
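The power loss from dichotomizing can be seen in a much simpler setting than the paper's: complete (no missingness) cross-sectional data, with plain z-tests standing in for the GEE/LMM machinery. The sample size and effect size below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n, delta, reps = 100, 0.5, 500
reject_cont = reject_dich = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)                      # control arm
    b = rng.normal(delta, 1.0, n)                    # treated arm
    # analyse the original continuous outcome: two-sample z-test on means
    z = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1)/n + b.var(ddof=1)/n)
    reject_cont += abs(z) > 1.96
    # dichotomize at the pooled median, then compare the two proportions
    cut = np.median(np.concatenate([a, b]))
    p1, p2 = (a > cut).mean(), (b > cut).mean()
    pbar = (p1 + p2) / 2
    zd = (p2 - p1) / np.sqrt(pbar * (1 - pbar) * 2 / n)
    reject_dich += abs(zd) > 1.96
power_cont = reject_cont / reps                      # continuous analysis
power_dich = reject_dich / reps                      # dichotomized analysis, lower
```

The continuous-outcome test rejects noticeably more often, the same qualitative conclusion the paper reaches for LMM versus dichotomized GEE.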

8.
Survival data analysis aims at collecting data on durations spent in a state by a sample of units, in order to analyse the process of transition to a different state. Survival analysis applied to social and economic phenomena typically relies upon data on transitions collected, for a sample of units, in one or more follow-up surveys. We explore the effect of misclassification of the transition indicator on parameter estimates in an appropriate statistical model for the duration spent in an origin state. Some empirical investigations of the bias induced by ignoring misclassification are reported, extending the model to include the possibility that the rate of misclassification can vary across units according to the value of some covariates. Finally, it is shown how a Bayesian approach can be used to obtain parameter estimates.

9.
Principal component analysis (PCA) is a widely used statistical technique for determining subscales in questionnaire data. As with any other statistical technique, missing data may complicate both its execution and its interpretation. In this study, six methods for dealing with missing data in the context of PCA are reviewed and compared: listwise deletion (LD), pairwise deletion, the missing data passive approach, regularized PCA, the expectation-maximization algorithm, and multiple imputation. Simulations show that except for LD, all methods give about equally good results for realistic percentages of missing data. Therefore, the choice of a procedure can be based on the ease of application or purely the convenience of availability of a technique.
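A small sketch of the kind of comparison involved, using listwise deletion and simple column-mean imputation (a crude stand-in for the more refined methods the study reviews) on invented two-factor data with 5% values missing completely at random.

```python
import numpy as np

def pca_components(X, k=2):
    """Leading principal axes of a complete data matrix (SVD of centered data)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]

rng = np.random.default_rng(1)
# 200 respondents, 6 items driven by 2 underlying factors plus noise
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))

miss = rng.random(X.shape) < 0.05                    # 5% missing completely at random
Xm = np.where(miss, np.nan, X)

complete = ~np.isnan(Xm).any(axis=1)
pc_listwise = pca_components(Xm[complete])           # listwise deletion
pc_meanimp = pca_components(np.where(miss, np.nanmean(Xm, axis=0), Xm))

# singular values near 1 mean the 2-dimensional subspace matches the full-data one
overlap = np.linalg.svd(pca_components(X) @ pc_listwise.T, compute_uv=False)
```

Comparing recovered subspaces (rather than raw loadings) avoids penalizing harmless rotations within the component space.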

10.
Data with censored initiating and terminating times arise quite frequently in acquired immunodeficiency syndrome (AIDS) epidemiologic studies. Analysis of such data involves a complicated bivariate likelihood, which is difficult to deal with computationally. Bayesian analysis, on the other hand, presents added complexities that have yet to be resolved. By exploiting the simple form of a complete-data likelihood and utilizing the power of a Markov Chain Monte Carlo (MCMC) algorithm, this paper presents a methodology for fitting Bayesian regression models to such data. The proposed methods extend the work of Sinha (1997), who considered non-parametric Bayesian analysis of this type of data. The methodology is illustrated with an application to a cohort of HIV-infected hemophiliac patients.
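The MCMC idea can be sketched in a far simpler setting than the paper's bivariate model: right-censored exponential lifetimes with a Gamma(1, 1) prior on the rate, sampled by random-walk Metropolis. All numbers are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.5
t = rng.exponential(1 / true_rate, 300)   # latent failure times
c = rng.exponential(4.0, 300)             # right-censoring times
obs = np.minimum(t, c)
event = (t <= c)                          # 1 if failure observed, 0 if censored

def log_post(lam):
    """Log posterior: exponential likelihood with right censoring + Gamma(1,1) prior."""
    if lam <= 0:
        return -np.inf
    return event.sum() * np.log(lam) - lam * obs.sum() - lam

lam, chain = 1.0, []
for _ in range(5000):                     # random-walk Metropolis
    prop = lam + 0.1 * rng.normal()
    if np.log(rng.random()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)
post_mean = np.mean(chain[1000:])         # typically close to the true rate 0.5
```

The censored terms contribute only the survival factor exp(-lam*obs), which is the "simple form of a complete-data likelihood" exploited here; the paper's models add regression structure and bivariate censoring on top of this skeleton.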

11.
The article focuses on a conditional quantile-filling imputation algorithm for analyzing a new kind of censored data: mixed interval-censored and complete data related to an interval-censored sample. With the algorithm, the imputed failure times, which are conditional quantiles, are obtained within the censoring intervals that contain some exact failure times. The algorithm is viable and feasible for parameter estimation under general distributions; for instance, the Weibull distribution admits a closed-form moment estimator after log-transformation. Furthermore, an interval-censored sample is a special case of the new censored sample, and the conditional imputation algorithm can also be used to handle interval-censored failure data. Comparing the interval-censored data with the new censored data via the imputation algorithm, in terms of estimation bias, we find that the new censored data perform better than the interval-censored data.
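The core imputation step can be sketched for the exponential distribution (a Weibull with shape 1, chosen here only because its quantile function has a simple closed form): replace a failure known only to lie in (a, b] by the conditional q-quantile of the assumed lifetime distribution restricted to that interval.

```python
import numpy as np

def impute_in_interval(a, b, lam, q=0.5):
    """Conditional q-quantile of an exponential(lam) lifetime given it fell in (a, b]."""
    Fa = 1.0 - np.exp(-lam * a)           # F(a)
    Fb = 1.0 - np.exp(-lam * b)           # F(b)
    # invert F at the q-quantile of the conditional distribution on (a, b]
    return -np.log(1.0 - (Fa + q * (Fb - Fa))) / lam

t_imp = impute_in_interval(2.0, 4.0, lam=0.5)   # lies strictly inside (2, 4)
```

In practice the rate parameter would itself be estimated, so the algorithm alternates imputation and estimation until the parameter estimates stabilize.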

13.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure. If misspecification occurs, it may lead to inefficient or biased estimators of parameters in the mean. One of the most commonly used methods for handling the covariance matrix is based on simultaneous modeling of the Cholesky decomposition. Therefore, in this paper, we reparameterize covariance structures in longitudinal data analysis through the modified Cholesky decomposition of the covariance matrix itself. Based on this modified Cholesky decomposition, the within-subject covariance matrix is decomposed into a unit lower triangular matrix involving moving average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. Then, we propose a fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method which combines the Gibbs sampler and Metropolis–Hastings algorithm is implemented to simultaneously obtain the Bayesian estimates of unknown parameters, as well as their standard deviation estimates. Finally, several simulation studies and a real example are presented to illustrate the proposed methodology.
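The decomposition itself is easy to demonstrate: any positive-definite covariance matrix factors as Σ = L D L′ with L unit lower triangular and D diagonal (the innovation variances), obtained by rescaling the ordinary Cholesky factor. The matrix below is invented.

```python
import numpy as np

def modified_cholesky(sigma):
    """Factor sigma = L D L' with L unit lower triangular, D diagonal."""
    C = np.linalg.cholesky(sigma)   # sigma = C C', C lower triangular
    d = np.diag(C)
    L = C / d                       # divide column j by d[j] -> unit diagonal
    D = np.diag(d ** 2)             # innovation variances
    return L, D

sigma = np.array([[4.0, 2.0, 0.5],
                  [2.0, 5.0, 1.0],
                  [0.5, 1.0, 3.0]])
L, D = modified_cholesky(sigma)     # L @ D @ L.T reconstructs sigma
```

The modeling step in the paper then writes the below-diagonal entries of L (the moving-average coefficients) and log of the diagonal of D as linear functions of covariates, which is what makes the reparameterization unconstrained and convenient for MCMC.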

14.
Regression is the method of choice for analyzing complex salary structures, such as those of university faculty. Unfortunately, not only are there limitations on the data available and shortcomings in the method, but courts do not always understand the evidence presented to them. Statistical analysis can play an important role in uncovering discrimination, but caution is necessary in analysis and presentation.

15.
In this paper, we introduce a Bayesian analysis for the Block and Basu bivariate exponential distribution using Markov Chain Monte Carlo (MCMC) methods, considering lifetimes in the presence of covariates and censored data. Posterior summaries of interest are obtained using the popular WinBUGS software. Numerical illustrations are introduced considering a medical data set related to the recurrence times of infection for kidney patients and a medical data set related to bone marrow transplantation for leukemia.

16.
The use of general linear modeling (GLM) procedures based on log-rank scores is proposed for the analysis of survival data and compared to standard survival analysis procedures. For the comparison of two groups, this approach performed similarly to the traditional log-rank test. In the case of more complicated designs without ties in the survival times, the approach was only marginally less powerful than tests from proportional hazards models, and clearly less powerful than a likelihood ratio test for a fully parametric model; with ties in the survival times, however, the approach proved more powerful than tests from Cox's semi-parametric proportional hazards procedure. The method appears to provide a reasonably powerful alternative for the analysis of survival data, is easily used in complicated study designs, avoids (semi-)parametric assumptions, and is computationally easy and inexpensive to employ.

17.
If interest lies in reporting absolute measures of risk from time-to-event data then obtaining an appropriate approximation to the shape of the underlying hazard function is vital. It has previously been shown that restricted cubic splines can be used to approximate complex hazard functions in the context of time-to-event data. The degree of complexity for the spline functions is dictated by the number of knots that are defined. We highlight through the use of a motivating example that complex hazard function shapes are often required when analysing time-to-event data. Through the use of simulation, we show that provided a sufficient number of knots are used, the approximated hazard functions given by restricted cubic splines fit closely to the true function for a range of complex hazard shapes. The simulation results also highlight the insensitivity of the estimated relative effects (hazard ratios) to the correct specification of the baseline hazard.
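A sketch of the restricted cubic spline basis itself (in the standard Durrleman-Simon truncated-power form): k knots yield k-2 nonlinear basis functions, and the restriction forces the fitted curve to be linear beyond the boundary knots, which tames the tails of the estimated hazard. The knot locations below are invented.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: k knots -> k-2 columns, linear beyond the boundary knots."""
    x = np.asarray(x, float)
    t = np.asarray(knots, float)
    k = len(t)

    def pos3(u):                       # truncated cubic (u)_+^3
        return np.maximum(u, 0.0) ** 3

    cols = []
    for j in range(k - 2):
        cols.append(pos3(x - t[j])
                    - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                    + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
    return np.column_stack(cols)

knots = [1.0, 3.0, 6.0, 9.0]
x = np.linspace(10, 20, 50)            # entirely beyond the last knot
B = rcs_basis(x, knots)                # each column is exactly linear out here
```

In a hazard model these columns (plus a linear term in x) enter the linear predictor; adding knots adds flexibility in the interior without changing the linear tail behaviour.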

18.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between‐treatment difference in population means of a clinical endpoint that is free from the confounding effects of “rescue” medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post‐rescue data. We caution that the commonly used mixed‐effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for “dropouts” (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient‐level imputation is not required. A supplemental “dropout = failure” analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between‐treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.

19.
For many questionnaires and surveys in the marketing, business, and health disciplines, items often involve ordinal scales (such as the Likert scale and rating scale) that are associated in sometimes complex ways. Techniques such as classical correspondence analysis provide a simple graphical means of describing the nature of the association. However, the procedure does not allow the researcher to specify how one item may be associated with another, nor does the analysis allow for the ordinal structure of the scales to be reflected. This article presents a graphical approach that can help the researcher to study in depth the complex association of the items and reflect the structure of the items. We will demonstrate the applicability of this approach using data collected from a study that involves identifying major factors that influence the level of patient satisfaction in a Neapolitan hospital.
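For reference, the classical correspondence analysis mentioned here reduces to an SVD of the table of standardized residuals; the contingency table below is invented, not the hospital data.

```python
import numpy as np

def correspondence_analysis(N):
    """Classical CA of a two-way contingency table via SVD of standardized residuals."""
    P = N / N.sum()                                  # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)              # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]            # principal row coordinates
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]         # principal column coordinates
    return rows, cols, sv

# invented 3x3 table: satisfaction level (rows) by ward (columns)
N = np.array([[20.0, 10.0, 5.0],
              [10.0, 15.0, 10.0],
              [5.0, 10.0, 25.0]])
rows, cols, sv = correspondence_analysis(N)
inertia = (sv ** 2).sum()      # total inertia = Pearson chi-square / n
```

Plotting the first two columns of `rows` and `cols` gives the usual CA map; the article's contribution is precisely what this classical version lacks, namely a way to impose the ordinal structure of the scales.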

20.
This paper discusses the specific problems of age-period-cohort (A-P-C) analysis within the general framework of interaction assessment for two-way cross-classified data with one observation per cell. The A-P-C multiple classification model containing the effects of age groups (rows), periods of observation (columns), and birth cohorts (diagonals of the two-way table) is characterized as one of a special class of models involving interaction terms assumed to have very specific forms. The so-called A-P-C identification problem, which results from the use of a particular interaction structure for detecting cohort effects, is shown to manifest itself in the form of an exact linear dependency among the columns of the design matrix. The precise relationship holding among these columns is derived, as is an explicit formula for the bias in the parameter estimates resulting from an incorrect specification of an assumed restriction on the parameters required to solve the normal equations. Current methods for modeling A-P-C data are critically reviewed, an illustrative numerical example is presented, and one potentially promising analysis strategy is discussed. However, given the large number of possible sources for error in A-P-C analyses, it is strongly recommended that the results of such analyses be interpreted with a great deal of caution.
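The identification problem is easy to exhibit numerically in the simplest (linear-effects) case: because cohort = period - age identically, the design matrix has an exact linear dependency among its columns and so is rank deficient. The data below are invented.

```python
import numpy as np

# With linear age, period, and cohort effects, cohort = period - age,
# so [1, age, period, cohort] cannot have full column rank.
rng = np.random.default_rng(3)
age = rng.integers(20, 80, 100).astype(float)
period = rng.integers(1960, 2000, 100).astype(float)
cohort = period - age                    # birth cohort (the table's diagonals)
X = np.column_stack([np.ones(100), age, period, cohort])
rank = np.linalg.matrix_rank(X)          # 3, not 4: the identification problem
```

The same dependency persists with categorical (dummy-coded) effects, which is why any A-P-C fit must impose an extra restriction on the parameters, and why the paper's bias formula for a wrongly chosen restriction matters.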


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号