71.
This article proposes a fully nonparametric kernel method to account for observed covariates in regression discontinuity designs (RDD), which may increase the precision of treatment effect estimation. It is shown that conditioning on covariates reduces the asymptotic variance and allows estimating the treatment effect at the rate of one-dimensional nonparametric regression, irrespective of the dimension of the continuously distributed elements in the conditioning set. Furthermore, the proposed method may decrease bias and restore identification by controlling for discontinuities in the covariate distribution at the discontinuity threshold, provided that all relevant discontinuously distributed variables are controlled for. To illustrate the estimation approach and its properties, we provide a simulation study and an empirical application to an Austrian labor market reform. Supplementary materials for this article are available online.
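As a rough illustration of the mechanics (not the paper's estimator), the sketch below simulates a sharp RDD with one observed covariate and estimates the jump at the cutoff by a local linear fit with a triangular kernel; partialling the covariate out of the outcome shrinks the residual variance before the local fit. The data-generating process, the bandwidth, and the simple linear covariate adjustment are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-1, 1, n)            # running variable, cutoff at 0
w = rng.normal(0, 1, n)              # observed baseline covariate
d = (x >= 0).astype(float)           # treatment indicator
tau = 2.0                            # true treatment effect
y = tau * d + 0.5 * x + 1.5 * w + rng.normal(0, 1, n)

def local_linear_jump(y, x, d, h):
    """Local linear estimate of the discontinuity at x = 0, triangular kernel."""
    sw = np.sqrt(np.clip(1 - np.abs(x) / h, 0, None))   # sqrt of kernel weights
    X = np.column_stack([np.ones_like(x), d, x, d * x])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[1]                                       # coefficient on d

tau_raw = local_linear_jump(y, x, d, h=0.5)

# crude covariate adjustment: remove the (here linear) contribution of w before
# the local fit; the paper's method instead conditions on w nonparametrically
slope = np.polyfit(w, y, 1)[0]
tau_adj = local_linear_jump(y - slope * w, x, d, h=0.5)
```

Both estimates recover the jump of 2; the adjusted fit works with an outcome whose residual variance is much smaller, which is the precision gain the abstract refers to.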
72.
This paper proposes a new conceptualisation of the construct of knowledge ambiguity. This new conceptualisation is essential because (1) past researchers have tended to narrowly define and operationalise knowledge ambiguity in terms of causal ambiguity or tacitness and (2) the prevalent non-comprehensive conceptualisation constrains our ability to overcome the problem of knowledge ambiguity. Knowledge ambiguity has been identified as a major obstacle to effective knowledge transfer and to the implementation of overall knowledge management systems. The new conceptualisation proposes that knowledge ambiguity is composed of two types of ambiguity: component ambiguity and causal ambiguity. Component ambiguity is uncertainty about knowledge content, whereas causal ambiguity is uncertainty about how to use the knowledge. This re-conceptualisation is supported by previous studies on knowledge characteristics, absorptive capacity and cognitive learning. In this paper, theoretical propositions are developed to demonstrate the compatibility of the new conceptualisation with the current understanding of these concepts. The present paper not only advances our understanding of knowledge ambiguity, it also points towards solutions for overcoming the problems associated with knowledge ambiguity: different measures are required to overcome the problems created by component ambiguity and by causal ambiguity. This paper’s re-conceptualisation of knowledge ambiguity makes it easier to theorise about and operationalise the concept. It aligns the definition of knowledge ambiguity with current definitions of related constructs, such as absorptive capacity and cognitive learning, that are used in the broader knowledge transfer and knowledge management literatures.
73.
How should a network experiment be designed to achieve high statistical power? Experimental treatments on networks may spread. Randomizing the assignment of treatment to nodes enhances learning about the counterfactual causal effects of a social network experiment, but also requires new methodology (e.g., Aronow and Samii, 2017a; Bowers et al., 2013; Toulis and Kao, 2013). In this paper we show that the way in which a treatment propagates across a social network affects the statistical power of an experimental design. As such, prior information regarding treatment propagation should be incorporated into the experimental design. Our findings justify reconsidering standard practice, built on the presumption of independent units, even in simple experiments: information about treatment effects is not maximized when we assign half the units to treatment and half to control. We also present an example in which statistical power depends on the extent to which the network degree of nodes is correlated with treatment assignment probability. We recommend that researchers think carefully about the underlying treatment propagation model motivating their study when designing an experiment on a network.
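A minimal Monte Carlo sketch of the power question, under an assumed and deliberately simple propagation model: treatment spills over to immediate neighbors on a ring network, and we compute the rejection rate of a naive difference-in-means test for several treatment shares. The network, spillover strength, and test statistic are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_power(p_treat, n=200, reps=500, tau=0.5, spill=0.3):
    """Monte Carlo power of a difference-in-means z-test on a ring network
    where treatment also spills over to a node's two immediate neighbors."""
    rejections = 0
    for _ in range(reps):
        d = rng.binomial(1, p_treat, n)
        # exposure: fraction of a node's two ring neighbors that are treated
        nbr = (np.roll(d, 1) + np.roll(d, -1)) / 2
        y = tau * d + spill * nbr + rng.normal(0, 1, n)
        t, c = y[d == 1], y[d == 0]
        se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
        if abs(t.mean() - c.mean()) / se > 1.96:
            rejections += 1
    return rejections / reps

# power for different treatment shares under the same propagation model
powers = {p: simulate_power(p) for p in (0.2, 0.5, 0.8)}
```

Varying `tau`, `spill`, or the network structure changes which treatment share maximizes power, which is the sense in which the propagation model should inform the design.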
74.
An Empirical Analysis of the Effect of China's Foreign Exchange Reserves on Inflation
Inflationary pressure has recently become increasingly evident, and whether the sharp growth in foreign exchange reserves is causally related to inflation is a question worth examining. Treating the inflationary episode of the 1980s–1990s and the current episode as two separate periods, the article applies Granger causality tests and correlation analysis to clarify the relationship between foreign exchange reserves and inflation in each period, explores the underlying causes, and offers related policy recommendations.
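A hedged sketch of the kind of Granger test the article applies, implemented from scratch on simulated data in which reserve growth feeds into inflation with a one-period lag. The series, lag order, and coefficients are invented for illustration; the article uses actual Chinese reserve and inflation data.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
d_res = rng.normal(0, 1, T)                    # simulated growth of FX reserves
infl = np.zeros(T)
for t in range(1, T):
    # inflation depends on its own lag and on lagged reserve growth
    infl[t] = 0.4 * infl[t - 1] + 0.3 * d_res[t - 1] + rng.normal(0, 0.5)

def granger_f(y, x, lag=1):
    """F statistic for 'x Granger-causes y' with a single lag."""
    yt, y1, x1 = y[lag:], y[:-lag], x[:-lag]
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
        return np.sum((yt - X @ beta) ** 2)
    X_r = np.column_stack([np.ones_like(y1), y1])        # restricted: own lag only
    X_u = np.column_stack([np.ones_like(y1), y1, x1])    # unrestricted: + lag of x
    rss_r, rss_u = rss(X_r), rss(X_u)
    df = len(yt) - X_u.shape[1]
    return (rss_r - rss_u) / (rss_u / df)

f_forward = granger_f(infl, d_res)   # reserves -> inflation (present in the DGP)
f_reverse = granger_f(d_res, infl)   # inflation -> reserves (absent in the DGP)
```

With one restriction, each F statistic is compared against the F(1, df) critical value (roughly 3.87 at the 5% level for df near 300); only the forward direction should reject.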
75.
An important problem in epidemiology and medical research is the estimation of the causal effect of a treatment action at a single point in time on the mean of an outcome, possibly within strata of the target population defined by a subset of the baseline covariates. Current approaches to this problem are based on marginal structural models, i.e. parametric models for the marginal distribution of counterfactual outcomes as a function of treatment and effect modifiers. The various estimators developed in this context furthermore each depend on a high-dimensional nuisance parameter whose estimation currently also relies on parametric models. Since misspecification of any of these models can lead to severely biased estimates of causal effects, the dependence of current methods on such parametric models represents a major limitation.
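To make the role of such nuisance models concrete, here is a minimal inverse-probability-weighting sketch for a point-treatment effect with a single confounder. For simplicity it plugs in the true propensity score; in practice the propensity is estimated, e.g. by a possibly misspecified logistic regression, which is exactly the limitation the abstract describes. The data-generating process is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
w = rng.normal(0, 1, n)                         # baseline confounder
p = 1 / (1 + np.exp(-w))                        # true propensity score P(A=1 | W)
a = rng.binomial(1, p)                          # treatment, confounded by w
y = 1.0 * a + 2.0 * w + rng.normal(0, 1, n)     # outcome; true causal effect = 1

# naive contrast is biased because treated units have systematically larger w
naive = y[a == 1].mean() - y[a == 0].mean()

# IPW estimator of E[Y(1)] - E[Y(0)] using the (here known) propensity score
ipw = np.mean(a * y / p) - np.mean((1 - a) * y / (1 - p))
```

The weighted contrast recovers the effect of 1 while the naive contrast is badly biased; with an estimated, misspecified propensity model the IPW estimate would inherit that misspecification.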
76.
The paper addresses a formal definition of a confounder based on the qualitative definition commonly used in standard epidemiology textbooks. To derive the criterion, given by Miettinen and Cook, for a factor to be a confounder, and to clarify inconsistencies between various criteria for a confounder, we introduce the concepts of an irrelevant factor, an occasional confounder and a uniformly irrelevant factor. We discuss criteria for checking these and show that Miettinen and Cook's criterion can also be applied to occasional confounders. Moreover, we consider situations with multiple potential confounders, and we obtain two necessary conditions that are satisfied by each confounder set. None of the definitions and results presented in this paper requires the ignorability and sufficient-control-of-confounding assumptions that are commonly employed in observational and epidemiological studies.
77.
Statistical analysis of performance indicators in UK higher education
Summary.  Attempts to measure the quality with which institutions such as hospitals and universities carry out their public mandates have gained in frequency and sophistication over the last decade. We examine methods for creating performance indicators in multilevel or hierarchical settings (e.g. students nested within universities) based on a dichotomous outcome variable (e.g. drop-out from the higher education system). The profiling methods that we study involve the indirect measurement of quality, by comparing institutional outputs after adjusting for inputs, rather than directly attempting to measure the quality of the processes unfolding inside the institutions. In the context of an extended case-study of the creation of performance indicators for universities in the UK higher education system, we demonstrate the large sample functional equivalence between a method based on indirect standardization and an approach based on fixed effects hierarchical modelling; offer simulation results on the performance of the standardization method in null and non-null settings; examine the sensitivity of this method to the inadvertent omission of relevant adjustment variables; explore random-effects reformulations and characterize settings in which they are preferable to fixed effects hierarchical modelling in this type of quality assessment; and discuss extensions to longitudinal quality modelling and the overall pros and cons of institutional profiling. Our results are couched in the language of higher education but apply with equal force to other settings with dichotomous response variables, such as the examination of observed and expected rates of mortality (or other adverse outcomes) in investigations of the quality of health care or the study of retention rates in the workplace.
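A toy version of indirect standardization for such an indicator: simulate students nested in institutions, compute each student's expected drop-out probability from an input-only model, and report each institution's observed-to-expected (O/E) ratio. The data-generating process is invented, and for brevity the expected probabilities use the known coefficients rather than a fitted logistic regression, which a real analysis would estimate from pooled data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_inst, per = 20, 500
inst = np.repeat(np.arange(n_inst), per)            # institution id per student
ability = rng.normal(0, 1, n_inst * per)            # student-level input (e.g. entry grades)
inst_eff = rng.normal(0, 0.3, n_inst)[inst]         # institution-level quality shift
logit = -1.0 - 0.8 * ability + inst_eff
drop = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # observed drop-out indicator

# expected drop-out probability from the input-only (no institution) model
p_exp = 1 / (1 + np.exp(-(-1.0 - 0.8 * ability)))

# indirect standardization: observed / expected drop-outs per institution
oe = np.array([drop[inst == j].sum() / p_exp[inst == j].sum()
               for j in range(n_inst)])
```

An O/E ratio above 1 flags an institution with more drop-outs than its student intake predicts; ratios hover around 1 when institutional effects are small.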
78.
To learn about the progression of a complex disease, it is necessary to understand the physiology and function of many genes operating together, in distinct interactions, as a system. To significantly advance our understanding of how such a system functions, we need to learn the causal relationships among its modeled genes. To this end, it is desirable to compare experiments on the system under complete interventions of some genes (e.g., gene knock-outs) with experiments on the system without interventions. However, it is expensive and difficult (if not impossible) to conduct wet lab experiments with complete interventions of genes in animal models, e.g., a mouse model. Thus, it would be helpful to discover promising causal relationships among genes from observational data alone, in order to identify promising genes to perturb in the system that can later be verified in wet laboratories. While causal Bayesian networks have been actively used in discovering gene pathways, most algorithms that discover pairwise causal relationships from observational data alone identify only a small number of significant pairwise causal relationships, even with a large dataset. In this article, we introduce new causal discovery algorithms, the Equivalence Local Implicit latent variable scoring Method (EquLIM) and EquLIM with a Markov chain Monte Carlo search algorithm (EquLIM-MCMC), that identify promising causal relationships even with a small observational dataset.
79.
We developed methods for estimating the causal risk difference and causal risk ratio in randomized trials with noncompliance. The developed estimator is unbiased under the assumption that the biases due to noncompliance are identical in the two treatment arms, where the biases are defined as the difference or ratio between the expectations of potential outcomes for the group that received the test treatment and for the control group within each randomly assigned arm. Although the instrumental variable estimator yields an unbiased estimate under a sharp null hypothesis, it may yield a biased estimate under a non-null hypothesis; the bias of the developed estimator does not depend on whether this hypothesis holds. The estimate of the causal effect from the developed estimator may therefore have a smaller bias than that from the instrumental variable estimator when a treatment effect exists. There is not yet a standard method for coping with noncompliance, so it is important to evaluate estimates under different assumptions, and the developed estimator can serve this purpose. An application to a field trial for coronary heart disease is provided.
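For reference, the standard instrumental variable (Wald) estimator that the abstract compares against can be sketched as follows, under one-sided noncompliance with a constant treatment effect; the trial size, compliance mechanism, and effect size are simulated assumptions, and the paper's own estimator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50000
z = rng.binomial(1, 0.5, n)               # randomized assignment
u = rng.normal(0, 1, n)                   # unobserved driver of compliance and outcome
comply = (u > -0.5).astype(int)           # compliers take treatment if assigned
a = z * comply                            # treatment received (one-sided noncompliance)
y = 2.0 * a + 1.0 * u + rng.normal(0, 1, n)   # outcome; true effect among takers = 2

# intention-to-treat effect is diluted by noncompliance
itt = y[z == 1].mean() - y[z == 0].mean()

# Wald / IV estimator rescales ITT by the compliance rate
iv = itt / (a[z == 1].mean() - a[z == 0].mean())
```

Here the ITT is roughly the true effect times the ~69% compliance rate, and the IV estimator recovers the effect of 2 because the effect is constant; under effect heterogeneity it instead targets the complier average effect.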
80.
Using a comprehensive simulation study based on empirical data, this article investigates the finite sample properties of different classes of parametric and semiparametric estimators of (natural) direct and indirect causal effects used in mediation analysis under sequential conditional independence assumptions. The estimators are based on regression, inverse probability weighting, and combinations thereof. Our simulation design uses a large population of Swiss jobseekers and considers variations of several features of the data-generating process (DGP) and the implementation of the estimators that are of practical relevance. We find that no estimator performs uniformly best (in terms of root mean squared error) in all simulations. Overall, so-called “g-computation” dominates. However, differences between estimators are often (but not always) minor in the various setups and the relative performance of the methods often (but not always) varies with the features of the DGP.
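As a minimal illustration of the regression-based approach in the simplest linear case, where natural direct and indirect effects reduce to the familiar product-of-coefficients decomposition, the sketch below fits the outcome and mediator equations by least squares. The DGP is invented and far simpler than the paper's empirical design, which also covers weighting-based and combined estimators.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
d = rng.binomial(1, 0.5, n)                     # randomized treatment
m = 0.8 * d + rng.normal(0, 1, n)               # mediator
y = 1.0 * d + 0.5 * m + rng.normal(0, 1, n)     # outcome

# fit y ~ 1 + d + m and m ~ 1 + d, then combine the coefficients
Xy = np.column_stack([np.ones(n), d, m])
by, *_ = np.linalg.lstsq(Xy, y, rcond=None)
Xm = np.column_stack([np.ones(n), d])
bm, *_ = np.linalg.lstsq(Xm, m, rcond=None)

direct = by[1]                # natural direct effect (true value 1.0)
indirect = by[2] * bm[1]      # natural indirect effect (true value 0.5 * 0.8 = 0.4)
```

In linear models without treatment-mediator interaction this product formula coincides with g-computation; the nonlinear and interactive settings studied in the article require the more general estimators.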