21.
The benefits of adjusting for baseline covariates are not as straightforward with repeated binary responses as with continuous response variables. Therefore, in this study, we compared different methods for analyzing repeated binary data through simulations when the outcome at the study endpoint is of interest. The methods compared were the chi-square test, Fisher's exact test, covariate-adjusted/unadjusted logistic regression (Adj.logit/Unadj.logit), covariate-adjusted/unadjusted generalized estimating equations (Adj.GEE/Unadj.GEE), and covariate-adjusted/unadjusted generalized linear mixed models (Adj.GLMM/Unadj.GLMM). All of these methods preserved the type I error close to the nominal level. Covariate-adjusted methods improved power compared with the unadjusted methods because of the increased treatment effect estimates, especially when the correlation between the baseline and the outcome was strong, even though there was an apparent increase in standard errors. Results of the chi-square test were identical to those of the unadjusted logistic regression. Fisher's exact test was the most conservative regarding the type I error rate and also had the lowest power. Without missing data, there was no gain in using a repeated-measures approach over a simple logistic regression at the final time point. Analysis of results from five phase III diabetes trials of the same compound was consistent with the simulation findings. Therefore, covariate-adjusted analysis is recommended for repeated binary data when the study endpoint is of interest. Copyright © 2015 John Wiley & Sons, Ltd.
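The kind of simulation the abstract describes can be illustrated in miniature. The sketch below is a hypothetical, much-simplified Monte Carlo check — invented sample sizes and replication counts, not the paper's settings — that estimates the type I error of the chi-square test for a binary endpoint with two equal-probability arms:

```python
# Toy Monte Carlo sketch (assumed parameters, not the paper's simulation design):
# under the null, both arms have the same response probability, so the
# chi-square test should reject at roughly the nominal 5% rate.
import random

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def simulate_type1(n_per_arm=100, p=0.5, reps=2000, seed=42):
    """Fraction of null-hypothesis replicates rejected at alpha = 0.05."""
    rng = random.Random(seed)
    crit = 3.841  # chi-square(1 df) critical value at alpha = 0.05
    rejections = 0
    for _ in range(reps):
        x = sum(rng.random() < p for _ in range(n_per_arm))  # events, arm 1
        y = sum(rng.random() < p for _ in range(n_per_arm))  # events, arm 2
        rejections += chi2_2x2(x, n_per_arm - x, y, n_per_arm - y) > crit
    return rejections / reps

rate = simulate_type1()
```

With 2000 replicates the empirical rejection rate should land near the nominal 0.05, mirroring the abstract's finding that the compared tests preserve the type I error.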
22.
This paper reports on empirical research into how press releases are being constructed. It starts from previous discourse-analytic work which has pointed to the 'preformulated' nature of press releases: in particular, it has been shown that through a number of metapragmatic features press releases can easily be copied by journalists in their own news reporting. In this paper we set out to subject one of these features, viz. pseudo-quotations (or so-called constructed direct speech), to a further empirical study, in which we scrutinize the process of constructing the press releases. We propose a detailed analysis of this process by combining ethnographic fieldwork with some of the methodology of cognitive psychology, including think-aloud protocols and on-line registration of the writing process. On the basis of this case study it is concluded that the design and functions of quotations in press releases are more complex than has been assumed so far. In addition, our preliminary results indicate that the combination of methods that we propose in this paper provides a sound starting point for both quantitative and qualitative analysis, allowing for a detailed analysis and interpretation of how press releases are being constructed.
23.
We analyse longitudinal data on CD4 cell counts from patients who participated in clinical trials that compared two therapeutic treatments: zidovudine and didanosine. The investigators were interested in modelling the CD4 cell count as a function of treatment, age at base-line and disease stage at base-line. Serious concerns can be raised about the normality assumption of CD4 cell counts that is implicit in many methods and therefore an analysis may have to start with a transformation. Instead of assuming that we know the transformation (e.g. logarithmic) that makes the outcome normal and linearly related to the covariates, we estimate the transformation, by using maximum likelihood, within the Box–Cox family. There has been considerable work on the Box–Cox transformation for univariate regression models. Here, we discuss the Box–Cox transformation for longitudinal regression models when the outcome can be missing over time, and we also implement a maximization method for the likelihood, assuming that the missing data are missing at random.
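As a toy illustration of estimating the transformation within the Box–Cox family by maximum likelihood — univariate case only, with invented log-normal data; the paper's longitudinal, missing-data setting is far more involved — the profile log-likelihood can be maximized over a grid of λ values:

```python
# Minimal sketch of Box-Cox maximum likelihood estimation (assumed setup,
# not the paper's longitudinal model): grid-search the profile log-likelihood.
import math
import random

def boxcox(y, lam):
    """Box-Cox transform of a list of positive values."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def profile_loglik(y, lam):
    """Normal-theory profile log-likelihood of the Box-Cox parameter."""
    z = boxcox(y, lam)
    n = len(z)
    mu = sum(z) / n
    var = sum((v - mu) ** 2 for v in z) / n
    # The Jacobian term (lam - 1) * sum(log y) makes values comparable across lam.
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in y)

def boxcox_mle(y, grid=None):
    """Grid-search maximum likelihood estimate of lambda."""
    grid = grid if grid is not None else [i / 100.0 for i in range(-200, 201)]
    return max(grid, key=lambda lam: profile_loglik(y, lam))

# Log-normal data: the transformation to normality is the log, i.e. lambda = 0.
rng = random.Random(0)
y = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(400)]
lam_hat = boxcox_mle(y)
```

For log-normal data the estimate should sit near λ = 0, recovering the logarithmic transformation rather than assuming it in advance.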
24.
Summary. The moment method is a well-known astronomical mode identification technique in asteroseismology which uses a time series of the first three moments of a spectral line to estimate the discrete oscillation mode parameters l and m. The method, in contrast with many other mode identification techniques, also provides estimates of other important continuous parameters such as the inclination angle α and the rotational velocity v_e. We developed a statistical formalism for the moment method based on so-called generalized estimating equations. This formalism allows an estimation of the uncertainty of the continuous parameters, taking into account that the different moments of a line profile are correlated and that the uncertainty of the observed moments also depends on the model parameters. Furthermore, we set up a procedure to take into account the mode uncertainty, i.e. the fact that often several modes (l, m) can adequately describe the data. We also introduce a new lack-of-fit function which works at least as well as a previous discriminant function, and which in addition allows us to identify the sign of the azimuthal order m. We applied our method to the star HD 181558 by using several numerical methods, from which we learned that numerically solving the estimating equations is an intensive task. We report on the numerical results, from which we gain insight into the statistical uncertainties of the physical parameters that are involved in the moment method.
25.
Summary. Road safety has recently become a major concern in most modern societies. The identification of sites that are more dangerous than others (black spots) can help in better scheduling road safety policies. This paper proposes a methodology for ranking sites according to their level of hazard. The model is innovative in at least two respects. Firstly, it makes use of all relevant information per accident location, including the total number of accidents and the number of fatalities, as well as the number of slight and serious injuries. Secondly, the model includes the use of a cost function to rank the sites with respect to their total expected cost to society. Bayesian estimation for the model via a Markov chain Monte Carlo approach is proposed. Accident data from 519 intersections in Leuven (Belgium) are used to illustrate the methodology proposed. Furthermore, different cost functions are used to show the effect of using different costs per type of injury on the resulting ranking.
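The ranking idea can be sketched with a deliberately simplified stand-in: instead of the paper's hierarchical model fitted by Markov chain Monte Carlo sampling, the toy below uses a conjugate Gamma–Poisson posterior per site and invented cost weights to rank sites by their expected cost to society:

```python
# Hypothetical sketch of cost-based site ranking. The site data, the relative
# cost weights, and the Gamma(1, 1) prior are all invented for illustration;
# the paper uses a full Bayesian model estimated by MCMC, not this shortcut.
COSTS = {"fatal": 100.0, "serious": 10.0, "slight": 1.0}  # assumed relative costs

def expected_cost(site_counts, years, prior_shape=1.0, prior_rate=1.0):
    """Cost-weighted sum of posterior-mean casualty rates (Gamma-Poisson)."""
    total = 0.0
    for kind, count in site_counts.items():
        # Gamma(shape, rate) prior + Poisson counts -> posterior mean rate:
        post_mean = (prior_shape + count) / (prior_rate + years)
        total += COSTS[kind] * post_mean
    return total

sites = {
    "A": {"fatal": 1, "serious": 2, "slight": 10},
    "B": {"fatal": 0, "serious": 6, "slight": 30},
    "C": {"fatal": 0, "serious": 1, "slight": 5},
}
ranking = sorted(sites, key=lambda s: expected_cost(sites[s], years=5), reverse=True)
```

Here site A outranks site B despite fewer total casualties, because the cost function weights its fatality heavily — the kind of effect the paper probes by varying the cost per injury type.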
26.
The current study focused on the associations between drinking motives, alcohol expectancies, self-efficacy, and drinking behavior in a representative sample of 553 Dutch adolescents and adults. Data were gathered by means of self-report questionnaires and a 14-day drinking diary. A model was postulated in which negative expectancies and self-efficacy were directly associated with drinking, and in which drinking motives mediated the associations between positive expectancies and drinking. The findings of multivariate analyses showed that drinking motives were related to general indicators of drinking and to drinking levels in specific situations. Furthermore, self-efficacy was moderately related to all drinking variables. Negative expectancies were related to general drinking variables but hardly to drinking in specific situations. Positive expectancies were hardly related to drinking in multivariate analyses and therefore mediation models could not be tested. No systematic moderator effects were apparent for age and gender on the associations between drinking motives, alcohol expectancies, self-efficacy, and drinking.
27.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing-not-at-random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = .013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing-not-at-random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
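The intuition behind placebo multiple imputation can be conveyed with a stripped-down sketch. Everything below is invented for illustration — made-up outcome data and a crude resampling-based imputation; the actual analysis imputes from a fitted placebo-arm model and pools full inferences, not just point estimates:

```python
# Toy placebo multiple imputation (hypothetical data): drug-arm dropouts are
# imputed from the placebo arm's observed outcomes, pulling the treatment
# contrast toward zero and giving a conservative "worst reasonable case".
import random
import statistics

def placebo_mi(drug, placebo_complete, n_imputations=50, seed=1):
    """drug: list of endpoint changes, with None marking dropouts."""
    rng = random.Random(seed)
    placebo_mean = statistics.mean(placebo_complete)
    contrasts = []
    for _ in range(n_imputations):
        imputed = [v if v is not None else rng.choice(placebo_complete)
                   for v in drug]
        contrasts.append(statistics.mean(imputed) - placebo_mean)
    # Crude pooling: average the per-imputation point estimates.
    return statistics.mean(contrasts)

drug = [-4.0, -5.0, None, -3.0, None, -4.0]       # two dropouts
placebo_complete = [-1.0, -2.0, 0.0, -1.0, -2.0, -1.0]
contrast = placebo_mi(drug, placebo_complete)
```

The pooled contrast stays negative (a treatment effect) but is attenuated relative to a completers-only contrast, which is exactly the conservative behavior the abstract exploits to bound the plausible range.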
28.
In this contribution, we focus on the results of the Belgian Trend Study. The intention of this study was to examine the prevalence of new production concepts within the widest possible range of companies in the automotive, machine tool, chemical, and clothing industries. The Trend Study aimed to answer the following questions: is the Taylorist division of labor a thing of the past? What are the alternatives? Are shifts in the division of labor accompanied by another type of personnel policy, and do traditional industrial relations have to make way for this new approach? The methodological concept used had to guarantee that the findings at the level of each industry could be generalized. Though the picture emerging from the empirical data collected in the four industrial sectors is inevitably diverse, the data make it possible merely to suggest a neo- rather than a post-Taylorist or -Fordist concept.
29.
30.
This paper presents new identification conditions for the mixed proportional hazard model. In particular, the baseline hazard is assumed to be bounded away from 0 and ∞ near t = 0. These conditions ensure that the information matrix is nonsingular. The paper also presents an estimator for the mixed proportional hazard model that converges at rate N^{-1/2}.