1.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
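To make the two strategies concrete, here is a minimal Python sketch (not the authors' code) of an LOCF endpoint contrast next to a simplified likelihood-based alternative. The column names `subject`, `visit`, `treatment`, and `change` and the arm labels `"drug"`/`"placebo"` are hypothetical, and the mixed model shown uses a random intercept rather than the unstructured covariance of a full MMRM.

```python
import pandas as pd
import statsmodels.formula.api as smf

def locf_endpoint_contrast(df: pd.DataFrame) -> float:
    """LOCF: carry each subject's last observed change forward to the final
    visit, then contrast arm means on the imputed endpoint values.
    Assumes long format with one row per scheduled subject-visit and NaN
    for observations missing after dropout (illustrative column names)."""
    df = df.sort_values(["subject", "visit"]).copy()
    df["change_locf"] = df.groupby("subject")["change"].ffill()
    endpoint = df[df["visit"] == df["visit"].max()]
    means = endpoint.groupby("treatment")["change_locf"].mean()
    return means["drug"] - means["placebo"]  # treatment contrast (Delta) under LOCF

def mar_likelihood_contrast(df: pd.DataFrame):
    """Likelihood-based MAR analysis of the observed data only; a random-
    intercept model is used here as a simplified stand-in for MMRM."""
    obs = df.dropna(subset=["change"])
    return smf.mixedlm("change ~ C(visit) * treatment", obs,
                       groups=obs["subject"]).fit()
```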
2.
We critically review conceptual and empirical issues surrounding the derivation of the international poverty line, expressed in PPP-adjusted dollars and linked to various rounds of the International Comparison Program (ICP). We find that there are some limitations in the current estimation of these lines, but show that statistically superior methods lead to lines that are relatively robust: they confirm the $1.25 line using 2005 PPPs and suggest $1.67–1.71 using 2011 PPPs (or close to the $1.90 proposed by the World Bank if we follow the World Bank’s approach of adjusting inflation rates in some countries); they also roughly confirm the current shape of the proposed ‘weakly relative’ poverty line. Using the new absolute line based on 2011 PPPs would lead to substantially lower poverty in our estimation. The extent of the decline depends on whether and how one treats China, India, and Indonesia differently from other countries in the 2005 and 2011 PPPs. More seriously, we note that the dependence on successive ICP rounds creates conceptual and empirical problems that have become worse over time, so we suggest that it would be best to consider alternatives to the current reliance on ICP rounds and the resulting PPPs. As a short-term solution we propose to fix the international poverty line in national currencies using either the 2005 or 2011 level; in the medium term, we argue for global poverty measurement based on internationally coordinated national poverty measurement.
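The short-term proposal to fix the line in national currencies follows the standard mechanics behind such lines: PPP-convert the international dollar line into local currency in a base year, then carry it forward with the domestic CPI. Below is a hedged sketch of that arithmetic with purely illustrative numbers (none taken from the paper).

```python
def national_poverty_line(intl_line_per_day: float,
                          ppp_factor_base: float,
                          cpi_base: float,
                          cpi_now: float) -> float:
    """Convert an international line (PPP dollars per day, base year) into
    today's local currency: PPP-convert in the base year, then update the
    local-currency line with the domestic CPI."""
    local_line_base = intl_line_per_day * ppp_factor_base  # LCU/day in base year
    return local_line_base * (cpi_now / cpi_base)

# Illustrative figures only: a $1.25/day line (2005 PPP), a hypothetical PPP
# factor of 15 LCU per international dollar in 2005, and 60% cumulative
# domestic inflation since the base year.
line_today = national_poverty_line(1.25, 15.0, 100.0, 160.0)  # = 30.0 LCU/day
```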
3.
This article develops two block bootstrap-based panel predictability test procedures that are valid under very general conditions. Some of the allowable features include cross-sectional dependence, heterogeneous predictive slopes, persistent predictors, and complex error dynamics, including cross-unit endogeneity. While the first test procedure tests if there is any predictability at all, the second procedure determines the units for which predictability holds in case of a rejection by the first. A weak unit root framework is adopted to allow persistent predictors, and a novel theory is developed to establish asymptotic validity of the proposed bootstrap. Simulations are used to evaluate the performance of our tests in small samples, and their implementation is illustrated through an empirical application to stock returns.
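As a rough illustration of the resampling idea (for a single unit, not the authors' panel procedure or its asymptotic theory), the sketch below bootstraps the t-statistic of a simple predictive regression using circular blocks; the block length, centering scheme, and variable names are assumptions made for the example.

```python
import numpy as np

def block_bootstrap_pvalue(y, x, block_len=10, n_boot=999, seed=0):
    """Toy circular block bootstrap test of H0: no predictability in
    y_t = a + b * x_{t-1} + e_t, for a single cross-sectional unit."""
    rng = np.random.default_rng(seed)
    y, x = np.asarray(y, float), np.asarray(x, float)
    yt, xt = y[1:], x[:-1]                      # align y_t with x_{t-1}
    T = len(yt)

    def slope_t(yy, xx):
        X = np.column_stack([np.ones_like(xx), xx])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        resid = yy - X @ beta
        s2 = resid @ resid / (len(yy) - 2)
        se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        return beta[1] / se_b

    t_obs = slope_t(yt, xt)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Resample (y_t, x_{t-1}) pairs in circular blocks to preserve
        # serial dependence within blocks.
        starts = rng.integers(0, T, size=int(np.ceil(T / block_len)))
        idx = np.concatenate([(s + np.arange(block_len)) % T for s in starts])[:T]
        t_boot[b] = slope_t(yt[idx], xt[idx])
    # Centre the bootstrap t-statistics at the sample value and compare.
    return (np.sum(np.abs(t_boot - t_obs) >= abs(t_obs)) + 1) / (n_boot + 1)
```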
4.
Retail business development is a broad goal for both private business interests and local policymakers, yet the goal of retail opportunities for local residents themselves is often seen as secondary. This paper considers the argument that retail opportunities and sense of community are in fact linked in important ways, links that reinforce the social fabric of a community and/or neighborhood. The paper first briefly reviews the inherent linkages between retail shopping and local development patterns, and then considers the sense of community in the context of Garfield County in western Colorado. Based on the key questions derived from this background, we formally test the inter-relationship between local retail spending and sense of community using detailed survey data, and then more broadly consider the factors that critically shape a locality's “sense of community.” These findings yield several important policy implications.
5.
Mark J. Kaiser, Risk Analysis, 2015, 35(8): 1562–1590
Public companies in the United States are required to report standardized values of their proved reserves and asset retirement obligations on an annual basis. When compared, these two measures provide an aggregate indicator of corporate decommissioning risk but, because of their consolidated nature, cannot readily be decomposed at a more granular level. The purpose of this article is to introduce a decommissioning risk metric defined in terms of the ratio of the expected value of an asset's reserves to its expected cost of decommissioning. Asset decommissioning risk (ADR) is more difficult to compute than a consolidated corporate risk measure, but can be used to quantify the decommissioning risk of structures and to perform regional comparisons, and also provides market signals of future decommissioning activity. We formalize two risk metrics for decommissioning and apply the ADR metric to the deepwater Gulf of Mexico (GOM) floater inventory. Deepwater oil and gas structures are expensive to construct, and at the end of their useful life, will be expensive to decommission. The value of proved reserves for the 42 floating structures in the GOM circa January 2013 is estimated to range between $37 and $80 billion for future oil prices between 60 and 120 $/bbl, which is about 10 to 20 times greater than the estimated $4.3 billion to decommission the inventory. Eni's Allegheny and MC Offshore's Jolliet tension leg platforms have ADR metrics less than one and are approaching the end of their useful life. Application of the proposed metrics in the regulatory review of supplemental bonding requirements in the U.S. Outer Continental Shelf is suggested to complement the current suite of financial metrics employed.
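The ADR metric itself is a simple ratio. The sketch below illustrates it with the fleet-level figures quoted in the abstract (the paper applies the metric asset by asset, which this example does not attempt).

```python
def asset_decommissioning_risk(expected_reserve_value: float,
                               expected_decom_cost: float) -> float:
    """ADR = expected value of an asset's remaining reserves divided by its
    expected decommissioning cost; values near or below 1 flag assets whose
    remaining reserves no longer cover their retirement obligations."""
    return expected_reserve_value / expected_decom_cost

# Aggregate illustration using the figures quoted in the abstract
# (fleet-level totals in billions of dollars), not asset-level data:
low  = asset_decommissioning_risk(37.0, 4.3)   # ~8.6 at 60 $/bbl
high = asset_decommissioning_risk(80.0, 4.3)   # ~18.6 at 120 $/bbl
```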
6.
This study seeks to extend the body of knowledge of pro-social behavior in comparative market settings by reporting on a high-stakes ultimatum game and revelation game experiments in two transition economies: Kazakhstan and Uzbekistan. While controlling for cultural differences and framing effects, we find statistically significant differences in fairness and honesty behavior between the two countries. Specifically, subjects in Uzbekistan (in an earlier stage of transition to a market economy) are fairer and more honest than their later-stage Kazakh counterparts. Our experimental findings have implications for the literature on pro-social behavior and market economies, and more generally, on the transmission process between formal and informal institutions.
7.
Beyond Agency
The reason why agency/structure and micro/macro debates remain unresolved is the bad essentialist habit of treating such pairs as opposite natural kinds. Once variation is allowed, agency and structure, or micro and macro, are temporary poles bracketing a continuum, with social entities moving along this continuum over time. Explaining these transformations from agency into structure, or micro into macro, and vice versa is the challenge for explanatory theory. This challenge is met by switching to a constructivist level of second-order observing. Then, agency and structure become variable devices or frames different observers might use to perform different sorts of cultural work.
8.
We propose two preprocessing algorithms suitable for climate time series. The first algorithm detects outliers based on an autoregressive cost update mechanism. The second one is based on the wavelet transform, a method from pattern recognition. In order to benchmark the algorithms' performance, we compare them to existing methods on a synthetic data set. Finally, for illustrative purposes, the proposed methods are applied to a data set of high-frequency temperature measurements from Novi Sad, Serbia. The results show that both methods together form a powerful tool for signal preprocessing: in the case of solitary outliers the autoregressive cost update mechanism prevails, whereas the wavelet-based mechanism is the method of choice in the presence of multiple consecutive outliers.
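As a rough sketch of the residual-based idea (a generic AR(1) prediction-error rule, not the paper's autoregressive cost-update mechanism or its wavelet counterpart), one might flag solitary outliers as follows; the threshold is an assumption.

```python
import numpy as np

def ar1_outlier_flags(x, threshold=4.0):
    """Flag points whose one-step-ahead AR(1) prediction error exceeds
    `threshold` robust standard deviations.  A wavelet-based alternative
    (e.g. thresholding detail coefficients from pywt.wavedec) is better
    suited to runs of consecutive outliers, as the abstract notes."""
    x = np.asarray(x, float)
    xc = x - x.mean()
    phi = (xc[1:] @ xc[:-1]) / (xc[:-1] @ xc[:-1])   # lag-1 AR coefficient
    resid = xc[1:] - phi * xc[:-1]                   # one-step prediction errors
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad if mad > 0 else resid.std() # robust scale estimate
    flags = np.zeros_like(x, dtype=bool)
    flags[1:] = np.abs(resid) > threshold * sigma
    return flags
```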
9.
We propose novel parametric concentric multi‐unimodal small‐subsphere families of densities for (p − 1)-dimensional spherical data with p − 1 ≥ 2. Their parameters describe a common axis for K small hypersubspheres, an array of K directional modes, one mode for each subsphere, and K pairs of concentration parameters, each pair governing horizontal (within the subsphere) and vertical (orthogonal to the subsphere) concentrations. We introduce two kinds of distributions. In its one‐subsphere version, the first kind coincides with a special case of the Fisher–Bingham distribution, and the second kind is a novel adaptation that models independent horizontal and vertical variations. In its multisubsphere version, the second kind allows for a correlation of horizontal variation over different subspheres. In medical imaging, the situation p − 1 = 2 occurs precisely in modeling the variation of a skeletally represented organ shape due to rotation, twisting, and bending. For both kinds, we provide new computationally feasible algorithms for simulation and estimation and propose several tests. To the best of the authors' knowledge, our proposed models are the first to treat the variation of directional data along several concentric small hypersubspheres, concentrated near modes on each subsphere, let alone horizontal dependence. Using several simulations, we show that our methods are more powerful than a recent nonparametric method and ad hoc methods. Using data from medical imaging, we demonstrate the advantage of our method and draw inferences about the dominating axis of rotation of the human knee joint at different walking phases.
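For intuition only, the toy sampler below draws points on the unit sphere S² concentrated near a single small circle, with separate horizontal (along-circle) and vertical (off-circle) concentration parameters. It is a crude Gaussian/von Mises approximation with made-up parameter names, not the proposed distribution family or its estimation algorithms.

```python
import numpy as np

def sample_small_circle(n, axis_colatitude=np.pi / 4, mode_azimuth=0.0,
                        kappa_h=20.0, kappa_v=200.0, seed=0):
    """Toy sampler on S^2: points near the small circle at colatitude
    `axis_colatitude` about the north-pole axis, with a directional mode at
    `mode_azimuth`.  kappa_h controls horizontal spread along the circle,
    kappa_v controls vertical spread off the circle (larger = tighter)."""
    rng = np.random.default_rng(seed)
    azimuth = mode_azimuth + rng.vonmises(0.0, kappa_h, size=n)               # horizontal
    colat = axis_colatitude + rng.normal(0.0, 1.0 / np.sqrt(kappa_v), size=n) # vertical
    return np.column_stack([np.sin(colat) * np.cos(azimuth),
                            np.sin(colat) * np.sin(azimuth),
                            np.cos(colat)])
```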
10.
Scholars, educators, regulators, pundits, and other observers are advocating for regulation and oversight of direct-to-consumer (DTC) genomic testing. As a result, the technology has been the subject of highly visible public and regulatory controversy. In this article, we explore the nature and shape of the sentiment of public discourse about the DTC company 23andMe. We conduct a quantitative content analysis and qualitative framing analysis of Tweets. We find that the discourse surrounding DTC genomics and 23andMe is largely positive. We also identify a number of frames users deploy to debate, discuss, and share their experiences with DTC genomics and 23andMe. We argue that these frames create meaning around this emerging technology for its users.