600 search results found (search time: 0 ms); results 101–110 are shown below.
101.
Due to intensive marketing and the rapid growth of online gambling, poker currently enjoys great popularity among large sections of the population. Although poker is legally a game of chance in most countries, some (particularly operators of private poker web sites) argue that it should be regarded as a game of skill or sport because the outcome of the game primarily depends on individual aptitude and skill. The available findings indicate that skill plays a meaningful role; however, serious methodological weaknesses and the absence of reliable information regarding the relative importance of chance and skill considerably limit the validity of extant research. Adopting a quasi-experimental approach, the present study examined whether poker-playing skill or card distribution is the more important determinant of success. Three average players and three experts sat down at a six-player table and played 60 computer-based hands of the poker variant “Texas Hold’em” for money. In each hand, one average player and one expert received (a) better-than-average cards (winner’s box), (b) average cards (neutral box), and (c) worse-than-average cards (loser’s box). This standardized manipulation of the card distribution controlled the factor of chance so that differences in performance between the average and expert groups could be determined. Overall, 150 individuals participated in a “fixed-limit” game variant and 150 individuals in a “no-limit” game variant. ANOVA results showed that experts did not outperform average players in terms of final cash balance; rather, card distribution was the decisive factor for successful poker playing. However, expert players were better able to minimize losses when confronted with disadvantageous conditions (i.e., worse-than-average cards). No significant differences were observed between the game variants. Furthermore, supplementary analyses confirmed differential game-related actions depending on card distribution, player status, and game variant. In conclusion, the findings indicate that poker should be regarded as a game of chance, at least under certain basic conditions, and they suggest new directions for further research.
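The reported analysis is a factorial ANOVA on final cash balance with expertise, card distribution, and game variant as factors. The sketch below shows, purely for illustration, how such an analysis could be set up; the data file and the column names (final_balance, expertise, card_box, game_variant) are hypothetical and not taken from the study.

```python
# Illustrative sketch only: factorial ANOVA on final cash balance.
# File and column names are hypothetical stand-ins for the study's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("poker_results.csv")  # one row per participant (hypothetical)

model = smf.ols(
    "final_balance ~ C(expertise) * C(card_box) * C(game_variant)",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II ANOVA table
```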
102.
To examine key parameters of the initial conditions in early category learning, two studies compared 5‐month‐olds’ object categorization between tasks involving previously unseen novel objects, and between measures within tasks. Infants in Experiment 1 participated in a visual familiarization–novelty preference (VFNP) task with two‐dimensional (2D) stimulus images. Infants provided no evidence of categorization by either their looking or their examining even though infants in previous research systematically categorized the same objects by examining when they could handle them directly. Infants in Experiment 2 participated in a VFNP task with 3D stimulus objects that allowed visual examination of objects’ 3D instantiation while denying manual contact with the objects. Under these conditions, infants demonstrated categorization by examining but not by looking. Focused examination appears to be a key component of young infants’ ability to form category representations of novel objects, and 3D instantiation appears to better engage such examining.
103.
104.
The authors consider Bayesian analysis for continuous‐time Markov chain models based on a conditional reference prior. For such models, inference about the elapsed time between chain observations depends heavily on the rate of decay of the prior as the elapsed time increases; moreover, improper priors on the elapsed time may lead to improper posterior distributions. This class of models is also characterized by an infinitesimal rate matrix, about whose parameters experts often have good prior knowledge. The authors show that the use of a proper prior for the rate matrix parameters together with the conditional reference prior for the elapsed time yields a proper posterior distribution. They also demonstrate that, compared with analyses based on priors previously proposed in the literature, a Bayesian analysis of the elapsed time based on the conditional reference prior possesses better frequentist properties. This type of prior therefore represents a better default choice for estimation software.
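To fix ideas, here is a generic sketch of this model class in my own notation (not the authors' specific prior construction): with infinitesimal rate matrix Q and an uncertain elapsed time t between two observed states x_0 and x_1, the transition probability comes from the matrix exponential, and the joint prior combines a proper prior on Q with a conditional prior on t given Q.

```latex
P(X_t = x_1 \mid X_0 = x_0, Q, t) = \bigl[e^{tQ}\bigr]_{x_0 x_1},
\qquad
\pi(Q, t \mid x_0, x_1) \;\propto\; \bigl[e^{tQ}\bigr]_{x_0 x_1}\, \pi(t \mid Q)\, \pi(Q).
```

The result highlighted above is that a proper prior on Q combined with the conditional reference prior for t gives a proper posterior, whereas an improper prior on the elapsed time need not.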
105.
Quantifying uncertainty in the biospheric carbon flux for England and Wales
Summary. A crucial issue in the current global warming debate is the effect of vegetation and soils on carbon dioxide (CO2) concentrations in the atmosphere. Vegetation can extract CO2 through photosynthesis, but respiration, decay of soil organic matter and disturbance effects such as fire return it to the atmosphere. The balance of these processes is the net carbon flux. To estimate the biospheric carbon flux for England and Wales, we address the statistical problem of inference for the sum of multiple outputs from a complex deterministic computer code whose input parameters are uncertain. The code is a process model which simulates the carbon dynamics of vegetation and soils, including the amount of carbon that is stored as a result of photosynthesis and the amount that is returned to the atmosphere through respiration. The aggregation of outputs corresponding to multiple sites and types of vegetation in a region gives an estimate of the total carbon flux for that region over a period of time. Expert prior opinions are elicited for marginal uncertainty about the relevant input parameters and for correlations of inputs between sites. A Gaussian process model is used to build emulators of the multiple code outputs and Bayesian uncertainty analysis is then used to propagate uncertainty in the input parameters through to uncertainty on the aggregated output. Numerical results are presented for England and Wales in the year 2000. It is estimated that vegetation and soils in England and Wales constituted a net sink of 7.55 Mt C (1 Mt C = 10^12 g of carbon) in 2000, with standard deviation 0.56 Mt C resulting from the sources of uncertainty that are considered.
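A minimal sketch of the emulate-then-propagate workflow described above (not the authors' implementation; the simulator run_carbon_model, its two-dimensional input space, and the input distribution are hypothetical stand-ins, and the multiple sites and vegetation types of the study are collapsed into a single output for brevity):

```python
# Illustrative sketch: fit a Gaussian process emulator to a small design of
# runs of an expensive simulator, then propagate input uncertainty through
# the cheap emulator by Monte Carlo. All names and ranges are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def run_carbon_model(theta):
    """Hypothetical placeholder for the expensive process model."""
    return np.sin(3.0 * theta[0]) + 0.5 * theta[1] ** 2

# Small design of simulator runs (the expensive step).
X_design = rng.uniform(0.0, 1.0, size=(30, 2))
y_design = np.array([run_carbon_model(x) for x in X_design])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_design, y_design)

# Elicited input uncertainty (hypothetical), pushed through the emulator.
theta_draws = rng.normal(loc=0.5, scale=0.1, size=(10_000, 2))
flux_draws = gp.predict(theta_draws)
print(f"mean flux {flux_draws.mean():.3f}, sd {flux_draws.std():.3f}")
```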
106.
Information before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques, which use point estimates of auxiliary parameters for estimating the expected blinded sample size, (i) fail to describe the range of likely sample sizes obtained once the anticipated data are observed and (ii) fail to adjust to a changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and by employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing Simple Adjustments or the EM algorithm. In contrast to these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed because of our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies given their ease of implementation.
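A much-simplified illustration of the underlying idea (a conjugate normal-model shortcut, not the sequential MCMC procedure proposed in the paper): blinded interim data inform a posterior for the outcome variance, and pushing posterior draws through the standard two-sample sample-size formula yields an interval for the required sample size rather than a single point estimate. The interim data, planning parameters, and prior are all hypothetical.

```python
# Simplified sketch: blinded Bayesian sample-size re-estimation for a normal
# endpoint. NOT the paper's algorithm; a conjugate shortcut used only to show
# how posterior uncertainty in a nuisance parameter becomes an interval for n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
blinded = rng.normal(0.0, 1.3, size=120)     # hypothetical blinded interim outcomes

alpha, power, delta = 0.05, 0.80, 0.5        # planned two-sided level, power, effect size
z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)

# Posterior for sigma^2 under a vague prior: inverse-gamma((n-1)/2, S/2).
n = blinded.size
a_post = 0.5 * (n - 1)
b_post = 0.5 * np.sum((blinded - blinded.mean()) ** 2)
sigma2_draws = stats.invgamma.rvs(a_post, scale=b_post, size=10_000, random_state=123)

# Standard per-arm sample-size formula, evaluated at each posterior draw.
n_per_arm = 2.0 * z**2 * sigma2_draws / delta**2
print(np.percentile(n_per_arm, [2.5, 50, 97.5]))   # interval for required n per arm
```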
107.
We address the problem of testing proportional hazards in the two-sample survival setting with right censoring, i.e., we check whether the well-known Cox model holds. Although there are many test proposals for this problem, only a few papers suggest how to improve performance for small sample sizes. In this paper, we do exactly this by carrying out our test as a permutation test as well as a wild bootstrap test. The asymptotic properties of our test, namely asymptotic exactness under the null hypothesis and consistency, carry over to both resampling versions. Various simulations for small sample sizes reveal an actual improvement of the empirical size and a reasonable power performance when the resampling versions are used. Moreover, the resampling tests perform better than the existing tests of Gill and Schumacher and of Grambsch and Therneau. The tests’ practical applicability is illustrated with real data examples.
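The resampling machinery itself is generic, as the following skeleton shows; ph_statistic is a hypothetical placeholder, not the test statistic of the paper, and a real application would plug in a statistic sensitive to departures from proportional hazards.

```python
# Generic permutation-test skeleton in the spirit of the resampling versions
# discussed above. `ph_statistic` is a placeholder stand-in only.
import numpy as np

def ph_statistic(time, status, group):
    # Placeholder: absolute difference in mean follow-up time between groups.
    # Replace with a statistic sensitive to non-proportional hazards.
    return abs(time[group == 1].mean() - time[group == 0].mean())

def permutation_pvalue(time, status, group, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = ph_statistic(time, status, group)
    exceed = sum(
        ph_statistic(time, status, rng.permutation(group)) >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)             # permutation p-value

# Tiny usage example with simulated right-censored data.
rng = np.random.default_rng(42)
time = rng.exponential(1.0, size=60)
status = (rng.uniform(size=60) < 0.8).astype(int)  # 1 = event, 0 = censored
group = np.repeat([0, 1], 30)
print(permutation_pvalue(time, status, group))
```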
108.
We propose a new methodology for maximum likelihood estimation in mixtures of nonlinear mixed-effects models (NLMEM). Such mixtures include mixtures of distributions, mixtures of structural models, and mixtures of residual error models. Since the individual parameters inside the NLMEM are not observed, we propose to combine the EM algorithm, usually used for mixture models when the mixture structure concerns an observed variable, with the Stochastic Approximation EM (SAEM) algorithm, which is known to be suitable for maximum likelihood estimation in NLMEM and also has nice theoretical properties. The main advantage of this hybrid procedure is that it avoids the simulation step for the unknown group labels that a “full” version of SAEM would require. The resulting MSAEM (Mixture SAEM) algorithm is now implemented in the Monolix software. Several criteria for classification of subjects and estimation of individual parameters are also proposed. Numerical experiments on simulated data show that MSAEM performs well in a general framework of mixtures of NLMEM. Indeed, MSAEM provides an estimator close to the maximum likelihood estimator in very few iterations and is robust with regard to initialization. An application to pharmacokinetic (PK) data demonstrates the potential of the method for practical applications.
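For orientation, a mixture of NLMEMs for observation y_ij of subject i can be written generically as follows (my notation, not necessarily that of the paper):

```latex
y_{ij} = f(x_{ij}, \varphi_i) + g(x_{ij}, \varphi_i)\,\varepsilon_{ij},
\qquad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2),
\qquad \varphi_i \mid z_i = k \sim \mathcal{N}(\mu_k, \Omega_k),
\qquad \Pr(z_i = k) = \pi_k ,
```

where f is the structural model, g the residual error model, φ_i the unobserved individual parameters, and z_i the latent group label. Both φ_i and z_i are missing data; the hybrid procedure described above avoids simulating the z_i, in contrast to a “full” SAEM that would simulate both.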
109.
Building on the formulation of risk aversion for multidimensional utility functions in Kihlstrom and Mirman (Journal of Economic Theory, 8(3), 361–388, 1974), we study the effect of risk aversion on optimal behavior in a general consumer’s maximization problem under uncertainty. We completely characterize the relationship between changes in risk aversion and classical demand theory. We show that the effect of risk aversion on optimal behavior depends on the income and substitution effects. Moreover, the effect of risk aversion is determined not by the riskiness of the risky good but rather by the riskiness of the utility gamble associated with each decision.
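For reference, the Kihlstrom–Mirman notion of comparative risk aversion can be stated informally as follows (my paraphrase): for two von Neumann–Morgenstern utilities u and v on commodity bundles that represent the same ordinal preferences over sure bundles,

```latex
v \ \text{is more risk averse than}\ u
\quad\Longleftrightarrow\quad
v = k \circ u \ \ \text{for some increasing, concave}\ k : \mathbb{R} \to \mathbb{R},
```

so that risk aversion is compared only between agents with identical ordinal preferences.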
110.
Luy M. Demography, 2012, 49(2): 607–627.
In general, the use of indirect methods is limited to developing countries; developed countries are usually assumed to have no need for such methods because detailed demographic data exist. However, what demographic analysis with direct methods can deliver is limited by the characteristics of the available macro data on births, deaths, and migration. For instance, in many Western countries, official population statistics do not permit the estimation of mortality by socioeconomic status (SES) or migration background, or the estimation of the relationship between parity and mortality. To overcome these shortcomings, I modify and extend the so-called orphanhood method for the indirect estimation of adult mortality from survey information on maternal and paternal survival, so that it can be applied to the populations of developed countries. The method is demonstrated and tested with data from two independent Italian cross-sectional surveys by estimating overall and SES-specific life expectancy. The empirical applications reveal that the proposed method can successfully estimate levels and trends of mortality differences in developed countries and thus offers new prospects for the analysis of mortality.
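For orientation only, the classical Brass-type maternal orphanhood relation, stated schematically in my notation (the paper modifies and extends relations of this kind rather than using this exact form), converts reported proportions of respondents with a surviving mother into adult female survivorship:

```latex
\frac{l(25+n)}{l(25)} \;\approx\; W(n)\,{}_{5}S_{n-5} \;+\; \bigl(1 - W(n)\bigr)\,{}_{5}S_{n},
```

where ₅S_x denotes the proportion of respondents aged x to x+4 who report their mother alive and the weights W(n) are tabulated as a function of the mean age of mothers at childbirth.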