Full-text access type
Paid full text | 131 articles |
Free | 7 articles |
Subject classification
Management | 33 articles |
Ethnology | 1 article |
Demography | 9 articles |
Theory and methodology | 12 articles |
Sociology | 47 articles |
Statistics | 36 articles |
Publication year
2023 | 4 articles |
2022 | 4 articles |
2021 | 2 articles |
2020 | 7 articles |
2019 | 7 articles |
2018 | 12 articles |
2017 | 6 articles |
2016 | 13 articles |
2015 | 4 articles |
2014 | 3 articles |
2013 | 16 articles |
2012 | 5 articles |
2011 | 4 articles |
2010 | 8 articles |
2009 | 6 articles |
2008 | 4 articles |
2007 | 3 articles |
2006 | 2 articles |
2005 | 3 articles |
2003 | 2 articles |
2002 | 2 articles |
2001 | 4 articles |
2000 | 3 articles |
1999 | 1 article |
1998 | 3 articles |
1997 | 3 articles |
1996 | 1 article |
1995 | 3 articles |
1994 | 1 article |
1986 | 1 article |
1979 | 1 article |
Sort order: 138 results found (search time: 15 ms)
51.
Emmanuel Descroix, Wojciech Świątkowski, Christian Graff 《Journal of Nonverbal Behavior》2022,46(1):19-44
Why do eye-blinks activate during conversation? We manipulated the informational content and communicative intent exchanged within dyads. By comparison to a silent...
52.
Frank A. Cowell, Emmanuel Flachaire, Sanghamitra Bandyopadhyay 《Journal of Economic Inequality》2013,11(4):421-437
We investigate a general problem of comparing pairs of distributions which includes approaches to inequality measurement, the evaluation of “unfair” income inequality, evaluation of inequality relative to norm incomes, and goodness of fit. We show how to represent the generic problem simply using (1) a class of divergence measures derived from a parsimonious set of axioms and (2) alternative types of “reference distributions.” The problems of appropriate statistical implementation are discussed and empirical illustrations of the technique are provided using a variety of reference distributions.
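The abstract's core idea, measuring how far an observed distribution sits from a chosen reference distribution with a divergence measure, can be illustrated with a minimal sketch. The paper derives a whole axiomatic class of divergences; the Kullback-Leibler divergence used below is only one familiar member, chosen here for illustration, and the income-share vectors are invented toy data:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions; one illustrative member of a broader divergence class."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Compare an observed income-share distribution against two reference
# distributions: itself (zero divergence) and perfect equality.
observed = np.array([0.1, 0.2, 0.3, 0.4])
equal = np.array([0.25, 0.25, 0.25, 0.25])

d_self = kl_divergence(observed, observed)   # identical distributions -> 0
d_equal = kl_divergence(observed, equal)     # positive: inequality vs. norm
```

Swapping in a different reference distribution (a norm-income distribution, a fitted parametric model for goodness of fit) changes `q` but not the machinery, which mirrors how the paper varies its "reference distributions."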
53.
This paper tests whether individual perceptions of markets as good or bad for a public good are correlated with the propensity to report gaps between willingness to pay and willingness to accept revealed within an incentive-compatible mechanism. Identifying people based on a notion of market affinity, we find that a substantial part of the gap can be explained by controlling for variables that were not controlled for before. This result suggests the valuation gap for public goods can be reduced through well-defined variables.
54.
Non‐random sampling is a source of bias in empirical research. It is common for the outcomes of interest (e.g. wage distribution) to be skewed in the source population. Sometimes, the outcomes are further subjected to sample selection, which is a type of missing data, resulting in partial observability. Thus, methods based on complete cases for skewed data are inadequate for the analysis of such data, and a general sample selection model is required. Heckman proposed a full maximum likelihood estimation method under the normality assumption for sample selection problems, and parametric and non‐parametric extensions have been proposed. We generalize the Heckman selection model to allow for underlying skew‐normal distributions. Finite‐sample performance of the maximum likelihood estimator of the model is studied via simulation. Applications illustrate the strength of the model in capturing spurious skewness in bounded scores, and in modelling data where logarithm transformation could not mitigate the effect of inherent skewness in the outcome variable.
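The sample-selection bias that motivates the Heckman-type model above is easy to demonstrate by simulation. The sketch below (not the paper's skew-normal estimator, just the underlying problem) draws a latent outcome and a correlated selection variable, then shows that the complete-case mean overstates the population mean; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent outcome (e.g., log-wage) and a correlated selection disturbance.
# The outcome is observed only when the selection variable clears a
# threshold -- the partial-observability setting the abstract describes.
n = 100_000
u = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, 0.7], [0.7, 1.0]], size=n)
outcome, selection_noise = u[:, 0], u[:, 1]
selected = selection_noise > 0.5        # only ~31% of cases are observed

true_mean = outcome.mean()              # ~0: mean in the full population
naive_mean = outcome[selected].mean()   # complete-case (observed-only) mean
# Positive correlation between outcome and selection pushes the
# complete-case mean upward, which is exactly the bias a selection
# model is built to correct.
```

Correcting this bias is what Heckman's likelihood does under normality; the paper's contribution is to let the latent distributions be skew-normal instead.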
55.
In this article, the Brier score is used to investigate the importance of clustering for the frailty survival model. For this purpose, two versions of the Brier score are constructed, i.e., a “conditional Brier score” and a “marginal Brier score.” Both versions of the Brier score show how the clustering effects and the covariate effects affect the predictive ability of the frailty model separately. Using a Bayesian and a likelihood approach, point estimates and 95% credible/confidence intervals are computed. The estimation properties of both procedures are evaluated in an extensive simulation study for both versions of the Brier score. Further, a validation strategy is developed to calculate an internally validated point estimate and credible/confidence interval. The ensemble of the developments is applied to a dental dataset.
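For readers unfamiliar with the metric, the Brier score at a fixed time point is just the mean squared difference between predicted survival probabilities and observed 0/1 status. The sketch below is a deliberately simplified version that ignores censoring and the frailty (clustering) structure; the article's "conditional" and "marginal" versions differ in whether the frailty term is conditioned on or integrated out. The numbers are invented:

```python
import numpy as np

def brier_score(pred_survival, event_free):
    """Brier score at one evaluation time: mean squared difference between
    predicted survival probabilities and observed event-free status (1/0).
    Censoring and frailty terms are ignored in this illustrative version."""
    pred = np.asarray(pred_survival, dtype=float)
    obs = np.asarray(event_free, dtype=float)
    return float(np.mean((pred - obs) ** 2))

# Four subjects: predicted probability of being event-free at time t,
# and whether each actually was. Lower scores indicate better predictions;
# a constant prediction of 0.5 would score 0.25.
preds = [0.9, 0.8, 0.3, 0.1]
alive = [1, 1, 0, 0]
score = brier_score(preds, alive)
```

The article's point is that computing this score once with the frailty included and once with it marginalized out separates how much predictive ability comes from the clustering versus the covariates.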
56.
David Dejardin, Emmanuel Lesaffre, Paul Hamberg, Jaap Verweij 《Pharmaceutical statistics》2014,13(3):196-207
Nowadays, treatment regimens for cancer often involve a combination of drugs. The determination of the doses of each of the combined drugs in phase I dose escalation studies poses methodological challenges. The most common phase I design, the classic '3+3' design, has been criticized for poorly estimating the maximum tolerated dose (MTD) and for treating too many subjects at doses below the MTD. In addition, the classic '3+3' design is not able to address the challenges posed by combinations of drugs. Here, we assume that a control drug (commonly used and well‐studied) is administered at a fixed dose in combination with a new agent (the experimental drug) whose appropriate dose has to be determined. We propose a randomized design in which subjects are assigned to the control or to the combination of the control and the experimental drug. The MTD is determined using a model‐based Bayesian technique based on the difference in the probability of dose-limiting toxicities (DLT) between the control and the combination arm. We show, through a simulation study, that this approach provides more accurate estimates of the MTD. We argue that this approach may differentiate between an extremely high probability of DLT observed in the control arm and a high probability of DLT in the combination arm. We also report on a fictive (simulated) analysis based on published data from a phase I trial of ifosfamide combined with sunitinib. Copyright © 2014 John Wiley & Sons, Ltd.
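The classic '3+3' rule that the abstract criticizes is simple enough to state in a few lines: enroll three subjects at a dose; escalate on 0/3 dose-limiting toxicities (DLTs); on 1/3, expand to six and escalate only if at most 1/6; otherwise declare the previous dose the MTD. The sketch below simulates that rule (it is the baseline design, not the paper's Bayesian randomized method, and the toxicity probabilities are invented):

```python
import random

def three_plus_three(dlt_probs, seed=0):
    """Simulate the classic '3+3' escalation rule.

    dlt_probs: true DLT probability at each dose level.
    Returns the index of the declared MTD, or -1 if even the
    lowest dose is declared too toxic.
    """
    rng = random.Random(seed)
    level = 0
    while True:
        dlts = sum(rng.random() < dlt_probs[level] for _ in range(3))
        if dlts == 1:
            # Expand the cohort to six subjects at the same dose.
            dlts += sum(rng.random() < dlt_probs[level] for _ in range(3))
        if dlts >= 2:
            return level - 1              # MTD is one level below
        if level == len(dlt_probs) - 1:
            return level                  # ran out of dose levels
        level += 1

# With no toxicity anywhere the rule walks to the top dose; with certain
# toxicity at the lowest dose it declares no safe dose at all.
top = three_plus_three([0.0, 0.0, 0.0, 0.0])
none = three_plus_three([1.0, 1.0])
```

Running such a simulator over plausible dose-toxicity curves is the standard way to show, as the paper does for its own design, how often a rule lands on the true MTD and how many subjects it treats at sub-therapeutic doses.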
57.
Samuel M. Mwalili, Emmanuel Lesaffre, Dominique Declerck 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(1):77-93
We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model, taking into account also the variability of the estimated correction terms. The different scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study on caries experience in 7-year-old Flemish children (Belgium). Since the measurement error is on the response, the factor 'examiner' could be included in the regression model to correct for its confounding effect. However, controlling for examiner largely removed the geographical east–west trend. Instead, we suggest a (Bayesian) ordinal logistic model which corrects for the scoring error (compared with a gold standard) using a calibration data set. The marginal posterior distribution of the regression parameters of interest is obtained by integrating out the correction terms pertaining to the calibration data set. This is done by processing two Markov chains sequentially, whereby one Markov chain samples the correction terms; the sampled correction term is then imputed into the Markov chain pertaining to the regression parameters. The model was fitted to the oral health data of the Signal–Tandmobiel® study. A WinBUGS program was written to perform the analysis.
58.
Motivated by a recent article published by Adam and Tawn, we characterize the distribution of two random variables X, Y that are linearly ordered, i.e., X < Y. We suppose that the random variables follow a bivariate extreme value distribution.
59.
Too Big to Learn: The Effects of Major Acquisition Failures on Subsequent Acquisition Divestment
We examine whether firms learn from their major acquisition failures. Drawing from a threat‐rigidity theoretical framework, we suggest that firms do not learn from their major acquisition failures. Furthermore, we hypothesize that host‐country experience reinforces the negative effects of major acquisition failures. Our research hypotheses are tested using an event history analysis of 741 acquisitions undertaken by French listed and non‐listed firms in the USA between January 1988 and December 2008. We use failure divestment (divestment resulting from acquisition failure) as a proxy for acquisition performance. Consistent with our theoretical framework, we find that major acquisition failures have a negative impact on future acquisition performance. Furthermore, we find that such negative effects are reinforced by firms’ host‐country experience.
60.
Bruno Delafont, Kevin Carroll, Claire Vilain, Emmanuel Pham 《Pharmaceutical statistics》2018,17(5):515-526
The longitudinal data from 2 published clinical trials in adult subjects with upper limb spasticity (a randomized placebo‐controlled study [NCT01313299] and its long‐term open‐label extension [NCT01313312]) were combined. Their study designs involved repeat intramuscular injections of abobotulinumtoxinA (Dysport®), and efficacy endpoints were collected accordingly. With the objective of characterizing the pattern of response across cycles, Mixed Model Repeated Measures analyses and Non‐Linear Random Coefficient (NLRC) analyses were performed and their results compared. The Mixed Model Repeated Measures analyses, commonly used in the context of repeated measures with missing dependent data, did not involve any parametric shape for the curve of changes over time. Based on clinical expectations, the NLRC included a negative exponential function of the number of treatment cycles, with its asymptote and rate included as random coefficients in the model. Our analysis focused on 2 specific efficacy parameters reflecting complementary aspects of efficacy in the study population. A simulation study based on a similar study design was also performed to further assess the performance of each method under different patterns of response over time. This highlighted a gain of precision with the NLRC model, and most importantly the need for its assumptions to be verified to avoid potentially biased estimates. These analyses describe a typical situation and the conditions under which non‐linear mixed modeling can provide additional insights on the behavior of efficacy parameters over time. Indeed, the resulting estimates from the negative exponential NLRC can help determine the expected maximal effect and the treatment duration required to reach it.
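The negative-exponential response curve at the heart of the NLRC model has the form y = A·(1 − exp(−k·t)), where the asymptote A is the expected maximal effect and the rate k governs how quickly it is approached over treatment cycles. The sketch below recovers both parameters from noise-free fabricated data by brute-force least squares; this grid search stands in for the paper's mixed-model machinery (no random coefficients, no real trial data) and is meant only to show why the two fitted quantities are clinically interpretable:

```python
import numpy as np

def fit_neg_exponential(t, y, a_grid, k_grid):
    """Least-squares fit of y = A * (1 - exp(-k * t)) by grid search.

    A is the asymptote (expected maximal effect) and k the rate --
    the two quantities the NLRC model treats as random coefficients.
    Returns (A_hat, k_hat, sse)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    best = (None, None, np.inf)
    for a in a_grid:
        for k in k_grid:
            resid = y - a * (1.0 - np.exp(-k * t))
            sse = float(resid @ resid)
            if sse < best[2]:
                best = (float(a), float(k), sse)
    return best

# Noise-free responses over six treatment cycles with A = 2.0, k = 0.5.
cycles = np.arange(1, 7)
response = 2.0 * (1.0 - np.exp(-0.5 * cycles))
a_hat, k_hat, sse = fit_neg_exponential(
    cycles, response,
    a_grid=np.linspace(1.0, 3.0, 201),
    k_grid=np.linspace(0.1, 1.0, 91),
)
```

Given a fitted A and k, the cycle count needed to reach, say, 95% of the maximal effect follows directly from t = −ln(0.05)/k, which is the practical use the abstract highlights.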