11.
Summary. Efron's biased coin design is a well-known randomization technique that helps to neutralize selection bias in sequential clinical trials comparing treatments, while keeping the experiment fairly balanced. Several researchers have proposed extensions of the biased coin design, focusing mainly on the large-sample properties of their designs. We modify Efron's procedure by introducing an adjustable biased coin design that is more flexible than the original. We compare it with other existing coin designs; in terms of balance and lack of predictability, its small-sample performance in many cases improves on the other sequential randomized allocation procedures.
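The allocation rule behind Efron's design is simple enough to sketch in a few lines. The function below is illustrative (the name and the default p = 2/3, Efron's classic choice, are ours, not the paper's adjustable variant):

```python
import random

def efron_biased_coin(n, p=2/3, seed=0):
    """Allocate n subjects to arms A/B with Efron's biased coin:
    toss a fair coin when the trial is balanced; otherwise favour
    the under-represented arm with probability p."""
    rng = random.Random(seed)
    assignments = []
    diff = 0  # (#A - #B) so far
    for _ in range(n):
        if diff == 0:
            prob_a = 0.5       # balanced: fair coin
        elif diff < 0:
            prob_a = p         # B is ahead: favour A
        else:
            prob_a = 1 - p     # A is ahead: favour B
        arm = "A" if rng.random() < prob_a else "B"
        diff += 1 if arm == "A" else -1
        assignments.append(arm)
    return assignments
```

Setting p = 1 forces deterministic correction (perfect balance but full predictability), while p = 1/2 is complete randomization; the adjustable designs discussed above tune this trade-off.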
12.
Bayesian networks for imputation
Summary. Bayesian networks are particularly useful for high dimensional statistical problems. They reduce the complexity of the phenomenon under study by representing joint relationships between a set of variables through conditional relationships between subsets of those variables. Following Thibaudeau and Winkler, we use Bayesian networks for imputing missing values. The method is introduced to deal with the consistency of imputed values: preservation of statistical relationships between variables (statistical consistency) and preservation of logical constraints in the data (logical consistency). We perform experiments on a subset of anonymous individual records from the 1991 UK population census.
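A minimal sketch of the idea, assuming a one-edge network (parent → child): estimate the conditional distribution from complete records, then draw imputed values from it so that the statistical relationship is preserved. Function names and the two-variable setup are illustrative, not the paper's full method:

```python
import random
from collections import Counter, defaultdict

def fit_conditional(records, parent, child):
    """Estimate P(child | parent) from complete records -- the smallest
    building block of a Bayesian network."""
    counts = defaultdict(Counter)
    for r in records:
        counts[r[parent]][r[child]] += 1
    return {p: {c: n / sum(cnt.values()) for c, n in cnt.items()}
            for p, cnt in counts.items()}

def impute(record, parent, child, cpt, rng):
    """Fill a missing child value by sampling from P(child | parent),
    so imputed values respect the estimated conditional relationship."""
    if record[child] is None:
        dist = cpt[record[parent]]
        values, probs = zip(*dist.items())
        record[child] = rng.choices(values, probs)[0]
    return record
```

In a real network, each variable would be imputed from its full parent set; logical consistency is then enforced by zeroing out probabilities of value combinations that violate the edit constraints.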
13.
The paper concerns the design of nonparametric low-pass filters that have the property of reproducing a polynomial of a given degree. Two approaches are considered. The first is locally weighted polynomial regression (LWPR), which leads to linear filters depending on three parameters: the bandwidth, the order of the fitting polynomial, and the kernel. We find a remarkable linear (hyperbolic) relationship between the cut-off period (frequency) and the bandwidth, conditional on the choice of order and kernel, upon which we build the design of a low-pass filter. The second hinges on a generalization of the maximum-concentration approach, leading to filters related to discrete prolate spheroidal sequences (DPSS). In particular, we propose a new class of low-pass filters that maximize the concentration over a specified frequency range, subject to polynomial reproduction constraints. The design of generalized DPSS filters depends on three parameters: the bandwidth, the polynomial order, and the concentration frequency. We discuss the properties of the corresponding filters in relation to the LWPR filters, and illustrate their use for low-pass filter design by investigating how the three parameters relate to the cut-off frequency.
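The LWPR route can be sketched directly: fitting a degree-d polynomial with kernel weights over a symmetric window and reading off the fitted central value yields a fixed linear filter in the observations. The code below is a generic illustration (the Epanechnikov-type default kernel and local-linear default are our choices, not the paper's):

```python
import numpy as np

def lwpr_filter_weights(h, degree=1,
                        kernel=lambda u: np.maximum(0.0, 1 - u**2)):
    """Weights of the symmetric linear filter implied by locally weighted
    polynomial regression at the window centre t = 0.
    Window runs over t = -h..h; the fitted value at 0 is
    e0' (X'WX)^{-1} X'W y, i.e. a linear combination of the y's."""
    t = np.arange(-h, h + 1)
    k = kernel(t / (h + 1))                         # kernel weights
    X = np.vander(t, degree + 1, increasing=True)   # [1, t, t^2, ...]
    W = np.diag(k)
    H = np.linalg.solve(X.T @ W @ X, X.T @ W)       # (X'WX)^{-1} X'W
    return H[0]                                     # row for the centre
```

By construction the weights reproduce any polynomial up to the fitting degree: for a local linear fit they sum to one and have zero first moment, which is exactly the polynomial-reproduction property the DPSS filters are then constrained to share.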
14.
In this paper an extension of tree-structured methodology to censored survival analysis is discussed. Tree-based methods (also called recursive partitioning) provide a useful alternative to classical survival analysis techniques, such as Cox's semi-parametric model, whenever the main purpose is to define groups of individuals, with either complete or censored life histories, having different survival probabilities based on the values of selected covariates. The essential feature of recursive partitioning is the construction of a decision rule in the form of a binary tree. Trees generally require fewer assumptions than classical methods and handle non-standard and non-linear data structures efficiently. Tree-growing methods make the processes of covariate selection and category grouping in event-history models explicit. An example concerning the analysis of time to marriage of Italian women is presented.
15.
In non-experimental research, data on the same population process may be collected simultaneously by more than one instrument. In the present application, for example, two sample surveys and a population birth registration system all collect observations on first births by age and year, while the two surveys additionally collect information on women's education. To make maximum use of the three data sources, the survey data are pooled and the population data introduced as constraints in a logistic regression equation. Introducing the population data as constraints reduces the standard errors of the age and birth-cohort parameters of the regression equation by around three-quarters. A halving of the standard errors of the education parameters is achieved by pooling observations from the larger survey dataset with those from the smaller survey. The percentage reduction in the standard errors from imposing population constraints is independent of the total survey sample size.
16.
The proliferation of gambling opportunities in Canada, coupled with an aging population, has led to an increased prevalence of gambling among older adults. Encouraged by this trend, gambling industries have modified their activities to attract and market to this group. Yet older adults are not a homogeneous group. The life experiences, values, and attitudes shared by generations make a cohort-specific analysis of gambling among older adults a worthwhile pursuit. Drawing on the Dualistic Model of Passion (Vallerand et al. in J Pers Soc Psychol 85(4):756–767, 2003), we discuss the role of passion in shaping gambling behaviours, and the implications of a harmonious or obsessive passion for the benefits and risks to two distinct generations of older adults. Based on their generational attributes, we posit that members of the Silent Generation (those born between 1925 and 1942) stand to gain more from the benefits of recreational gambling, but also stand to lose more from problem gambling, than their children's generation, the Baby Boomers (those born between 1942 and 1964). Preventative strategies to assist problem-gambling seniors, along with recommendations for further research, are discussed.
17.
The central idea of Disappointment theory is that an individual forms an expectation about a risky alternative, and may experience disappointment if the outcome eventually obtained falls short of that expectation. We abandon the hypothesis of a well-defined prior expectation: disappointment feelings may arise from comparing the outcome received with any of the gamble's outcomes that the individual failed to get. This leads to a new, general form of Disappointment model. It encompasses Rank Dependent Utility with an explicit one-parameter probability transformation, and Risk-Value models with a generic risk measure including Variance, providing a unifying behavioral foundation for these models. JEL classification: D80, D81.
18.
This paper introduces Ergonomic Work Analysis as a relevant instrument for identifying risks in occupational environments through the investigation of factors that influence the relationship between the worker and the productive process. It draws a parallel between the several aspects of risk identification in traditional Health and Safety Management tools and the factors embraced by Ergonomic Work Analysis, showing that the ergonomic methodology probes deeper into the scenarios of possible incident causes. This deepening makes it possible to relate the work context to subsequent damage to the physical integrity of the worker, so the method acts as a complement to the traditional approach to risk management. To illustrate the preventive application of this methodology, a case study of a coal-mill inspector at a steelmaking company is presented.
19.
This paper focuses on two issues that are crucial for improving the analysis of multidimensional inequality: the effect on the measurement of multidimensional well-being of both the dispersion of well-being attributes across individuals and the interaction among attributes. To approach these distributional questions we rely on the Atkinson, Kolm, Sen (hereafter AKS) methodology, which defines a multidimensional inequality index consistent with the Pigou–Dalton principle. This index can be decomposed into univariate indexes belonging to the class of AKS indexes, plus a residual term accounting for the interaction across dimensions. The empirical application investigates the evolution of inequality in well-being across some EU countries between 1994 and 2001. Since the multidimensional index depends on the values assigned to its parameters, we test the sensitivity of the trend in well-being to the degree of inequality aversion on each dimension. Our empirical results summarize the evolution of inequality for the well-being indicators considered both separately and jointly, over time and across countries.
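The univariate building block of the AKS decomposition is the Atkinson index, which compares the mean with the equally distributed equivalent (EDE) level of the attribute under a chosen degree of inequality aversion ε. A minimal sketch (the function name is ours):

```python
import numpy as np

def atkinson(x, eps):
    """Atkinson inequality index: 1 - EDE / mean, where the EDE is
    the power mean of order (1 - eps); eps is the inequality-aversion
    parameter, with the geometric mean as the eps = 1 limit."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    if eps == 1:
        ede = np.exp(np.log(x).mean())
    else:
        ede = np.mean(x ** (1 - eps)) ** (1 / (1 - eps))
    return 1 - ede / mu
```

The sensitivity analysis described above amounts to recomputing such indexes over a grid of ε values per dimension and checking whether the ranking of countries or years is stable.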
20.
Income share elasticity is a function π that can describe the size distribution of income (Esteban in Intern Econ Rev 27:439–444, 1986). The conventional density representation of that distribution, on the other hand, yields the notions of first- and second-order stochastic dominance (SD), widely used to describe shifts in income distribution and to which inequality measures are attached. The paper draws a link between the two by providing conditions under which a given shift in π is equivalent to a first- or second-order SD shift of the income distribution. Some applications to Lorenz rankings are also provided.
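For concreteness, second-order SD of distribution X over Y requires the integrated CDF of X to lie weakly below that of Y everywhere. A sketch of that check for two empirical samples (a plain numerical comparison, not a formal statistical test; the function name is ours):

```python
import numpy as np

def ssd_dominates(x, y):
    """True if the empirical distribution of sample x second-order
    stochastically dominates that of sample y: the integrated empirical
    CDF of x is <= that of y at every point of the pooled grid."""
    x, y = np.sort(x), np.sort(y)
    grid = np.union1d(x, y)
    Fx = np.searchsorted(x, grid, side="right") / len(x)
    Fy = np.searchsorted(y, grid, side="right") / len(y)
    dz = np.diff(grid)
    # Step CDFs are constant on [grid[i], grid[i+1]), so integrate
    # with left endpoints.
    ix = np.concatenate(([0.0], np.cumsum(Fx[:-1] * dz)))
    iy = np.concatenate(([0.0], np.cumsum(Fy[:-1] * dz)))
    return bool(np.all(ix <= iy + 1e-12))
```

A mean-preserving contraction (less dispersion, same mean) dominates in this sense, which is why SSD shifts connect directly to Lorenz rankings and inequality measures.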
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号