By access type:
  Paid full text   541 articles
  Free   11 articles
  Domestic free   3 articles
By discipline:
  Management   37 articles
  Demography   1 article
  Collected works   31 articles
  Theory and methodology   18 articles
  General   202 articles
  Sociology   9 articles
  Statistics   257 articles
By year:
  2024   1 article
  2023   2 articles
  2022   3 articles
  2021   3 articles
  2020   5 articles
  2019   11 articles
  2018   15 articles
  2017   12 articles
  2016   6 articles
  2015   6 articles
  2014   18 articles
  2013   120 articles
  2012   35 articles
  2011   19 articles
  2010   21 articles
  2009   28 articles
  2008   23 articles
  2007   33 articles
  2006   24 articles
  2005   25 articles
  2004   26 articles
  2003   13 articles
  2002   14 articles
  2001   21 articles
  2000   12 articles
  1999   11 articles
  1998   7 articles
  1997   6 articles
  1996   7 articles
  1995   2 articles
  1994   6 articles
  1993   3 articles
  1992   2 articles
  1991   1 article
  1990   2 articles
  1989   1 article
  1987   5 articles
  1985   1 article
  1984   2 articles
  1982   2 articles
  1981   1 article
A total of 555 results found (search time: 15 ms).
1.
This study investigates how individuals assess imprecise information. We focus on two essential dimensions of decision under uncertainty, outcomes and probabilities, and their respective precision. We believe the precision of information is highly relevant in the investment setting, as reflected in the well-known “home (familiarity) bias”, and that the outcome and probability dimensions, separately or jointly, may affect investors’ knowledge of uncertainty and the perceived risk of investment options, and subsequently affect investors’ choices. To test this conjecture, we conducted three experiments. Our results show that (1) participants demonstrate a preference for precision and an aversion to extreme vagueness, associating vagueness with higher perceived risk and lower investment (experiments one and two); (2) participants prefer vague outcome information to vague probability information (experiment two); and (3) familiarity indeed positively affects the precision of estimated values, but this association is stronger for the outcome dimension than for probabilities (experiment three). Our results confirm that precision in information, especially in the outcome dimension, has an impact on investors’ resource-allocation choices.
2.
Multinomial logit (also termed multi-logit) models permit the analysis of the statistical relation between a categorical response variable and a set of explicative variables (called covariates or regressors). Although the multinomial logit is widely used in both the social and economic sciences, the interpretation of regression coefficients may be tricky, as the effect of covariates on the probability distribution of the response variable is nonconstant and difficult to quantify. The ternary plots illustrated in this article aim at facilitating the interpretation of regression coefficients and permit the effect of covariates (considered singly or jointly) on the probability distribution of the dependent variable to be quantified. Ternary plots can be drawn for both ordered and unordered categorical dependent variables when the number of possible outcomes equals three (trinomial response variable); these plots make it possible not only to represent the covariate effects over the whole parameter space of the dependent variable but also to compare the covariate effects for any given individual profile. The method is illustrated and discussed through the analysis of a dataset concerning the transition of master’s graduates of the University of Trento (Italy) from university to employment.
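The core idea behind these ternary plots is that a trinomial probability vector lives on the 2-simplex and can be mapped to a point in a triangle. The following is a minimal sketch under assumed data (simulated covariate and response, not the Trento dataset), tracing how predicted probabilities from a fitted multinomial logit move across the simplex as a covariate varies; the paper's own plotting conventions may differ.

```python
# Sketch: fit a trinomial logit, map predicted probability vectors
# (p1, p2, p3) to barycentric (ternary) coordinates, and plot the
# trajectory traced out as a covariate varies. Data are simulated.
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # hypothetical covariate
eta1 = 0.8 * x - 0.2                        # linear predictors vs. baseline
eta2 = -0.5 * x + 0.1
p = np.exp(np.column_stack([np.zeros(n), eta1, eta2]))
p /= p.sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=pi) for pi in p])  # trinomial response

result = sm.MNLogit(y, sm.add_constant(x)).fit(disp=0)

# Predicted probabilities along a grid of covariate values
grid = np.linspace(x.min(), x.max(), 100)
probs = result.predict(sm.add_constant(grid))    # shape (100, 3)

def ternary_xy(p):
    """Barycentric -> Cartesian; each row of p sums to 1."""
    return p[:, 1] + 0.5 * p[:, 2], (np.sqrt(3) / 2) * p[:, 2]

tx, ty = ternary_xy(probs)
tri = plt.Polygon([(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)], fill=False)
plt.gca().add_patch(tri)
plt.plot(tx, ty, "b-")        # trajectory of the covariate effect
plt.axis("equal"); plt.axis("off")
plt.title("Covariate effect on the trinomial response")
plt.show()
```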
3.
Summary. A fairly general procedure is studied to perturb a multivariate density satisfying a weak form of multivariate symmetry, generating a whole set of non-symmetric densities. The approach is sufficiently general to encompass some recent proposals in the literature, variously related to the skew normal distribution. The special case of skew elliptical densities is examined in detail, establishing connections with existing similar work. The final part of the paper specializes further to a form of multivariate skew t-density. Likelihood inference for this distribution is examined and illustrated with numerical examples.
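For readers unfamiliar with this perturbation scheme, the standard Azzalini-type lemma that underlies it can be written as follows; this is the well-known general form, stated here as context rather than a reproduction of the paper's own derivation.

```latex
% If f_0 is a density on R^d symmetric about 0, G is a scalar
% distribution function with a density symmetric about 0, and
% w : R^d -> R is odd (w(-x) = -w(x)), then the perturbed function
\[
  f(x) \;=\; 2\, f_0(x)\, G\{w(x)\}, \qquad x \in \mathbb{R}^d,
\]
% is again a proper density. The scalar skew normal is the special
% case f_0 = \phi, G = \Phi, w(x) = \alpha x:
\[
  f(x; \alpha) \;=\; 2\,\phi(x)\,\Phi(\alpha x),
\]
% where \phi and \Phi are the standard normal density and distribution
% function, and \alpha controls skewness (\alpha = 0 recovers normality).
```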
4.
Various authors, given k location parameters, have considered lower confidence bounds on (standardized) differences between the largest and each of the other k − 1 parameters. They have then used these bounds to put lower confidence bounds on the probability of correct selection (PCS) in the same experiment (the one used for finding the lower bounds on the differences). It is pointed out that this is an inappropriate inference procedure. Moreover, if the PCS refers to some later experiment, it is shown that whenever a non-trivial confidence bound is possible, it is already possible to conclude, with greater confidence, that correct selection has occurred in the first experiment. The short answer to the question in the title is therefore ‘No’, but this should be qualified in the case of a Bayesian analysis.
5.
Summary. In New Testament studies, the synoptic problem is concerned with the relationships between the gospels of Matthew, Mark and Luke. An earlier paper set up a careful specification, in probabilistic terms, of Honoré's triple-link model. In the present paper, a modification of Honoré's model is proposed. As previously, counts of the numbers of verbal agreements between the gospels are examined to investigate which of the possible triple-link models gives the best fit to the data, now using the modified version of the model and additional sets of data.
6.
Longitudinal categorical data arise in a variety of fields and are frequently analyzed with the generalized estimating equation (GEE) method. Before making further inferences based on a GEE model, assessing the model's fit is crucial. Graphical techniques have long been in widespread use for assessing model adequacy. We develop alternative graphical approaches, based on plots of the marginal model-checking condition and of local mean deviance, to assess GEE models with a logit link for longitudinal binary responses. The proposed procedures are illustrated on two longitudinal binary datasets.
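As background for the kind of marginal model the abstract refers to, here is a minimal, self-contained sketch of fitting a GEE with logit link to longitudinal binary data. The simulated data, variable names, and exchangeable working correlation are illustrative assumptions; the paper's diagnostic plots themselves are not reproduced.

```python
# Sketch: GEE with logit link for longitudinal binary responses,
# using statsmodels. Data are simulated with a subject-level shift
# to induce within-subject correlation.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_subj, n_time = 100, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
    "x": rng.normal(size=n_subj * n_time),
})
b = np.repeat(rng.normal(scale=0.8, size=n_subj), n_time)
eta = -0.5 + 0.7 * df["x"] + 0.3 * df["time"] + b
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

model = sm.GEE.from_formula(
    "y ~ x + time",
    groups="id",
    data=df,
    family=sm.families.Binomial(),            # logit link is the default
    cov_struct=sm.cov_struct.Exchangeable(),  # working correlation
)
result = model.fit()
print(result.summary())

# Raw material for a graphical model check: fitted marginal means
# can be plotted against observed responses, e.g. by time point.
df["fitted"] = result.fittedvalues
```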
7.
8.
ABSTRACT

A statistical test can be seen as a procedure that produces a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not reject a null hypothesis), Kaiser’s directional two-sided test, as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involves three possible decisions when inferring the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain in statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein’s theories. In this article, we introduce a five-decision-rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing a nonnegligible (typically 20%) reduction in the sample size needed to reach a given statistical power, compared to the traditional approach.
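One plausible reading of such a five-decision rule, for a z-statistic, is sketched below. The decision labels and the exact combination of critical values are illustrative assumptions, not the authors' published rule: the two-sided test supports strict directional conclusions, while the one-sided tests support weaker directional ones.

```python
# Hedged sketch: five decisions from one two-sided test plus two
# one-sided tests on z = (theta_hat - theta0) / se, at level alpha.
from scipy.stats import norm

def five_decision(z: float, alpha: float = 0.05) -> str:
    z_two = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_one = norm.ppf(1 - alpha)       # one-sided critical value
    if z <= -z_two:
        return "theta < theta0"       # strict directional conclusion
    if z >= z_two:
        return "theta > theta0"
    if z <= -z_one:
        return "theta <= theta0"      # weaker one-sided conclusion
    if z >= z_one:
        return "theta >= theta0"
    return "no conclusion"

for z in (-2.5, -1.8, 0.3, 1.8, 2.5):
    print(f"z = {z:+.1f}: {five_decision(z)}")
```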
9.
In analyzing data from unreplicated factorial designs, the half-normal probability plot is commonly used to screen for the ‘vital few’ effects. Recently, many formal methods have been proposed to overcome the subjectivity of this plot. Lawson (1998) (hereafter denoted as LGB) suggested a hybrid method based on the half-normal probability plot that blends the Lenth (1989) and Loh (1992) methods. The method consists of fitting a simple least-squares line to the inliers, which are determined by the Lenth method. Effects exceeding the prediction limits based on the fitted line are candidates for the vital few effects. To improve the accuracy of partitioning the effects into inliers and outliers, we propose a modified LGB method (hereafter denoted the Mod_LGB method), in which more outliers can be classified by using both Carling’s (2000) modification of the box plot and the Lenth method. If no outlier exists, or if there is a wide range in the inliers as determined by the Lenth method, more outliers can be found by the Carling method. A simulation study is conducted for unreplicated 2^4 designs with the number of active effects ranging from 1 to 6 to compare the efficiency of the Lenth method, the original LGB method, and the proposed Mod_LGB method.
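The Lenth method that both LGB and Mod_LGB build on is easy to state: a pseudo standard error (PSE) is computed from the effect estimates themselves, and effects exceeding a t-based margin of error are flagged. The sketch below follows Lenth's (1989) published definitions; the 15 effect values are made up for illustration (as in a 2^4 design).

```python
# Sketch: Lenth's pseudo standard error and margin of error for
# screening active effects in an unreplicated factorial design.
import numpy as np
from scipy.stats import t

def lenth_pse(effects: np.ndarray) -> float:
    abs_e = np.abs(effects)
    s0 = 1.5 * np.median(abs_e)         # initial robust scale estimate
    trimmed = abs_e[abs_e < 2.5 * s0]   # drop likely-active effects
    return 1.5 * np.median(trimmed)

effects = np.array([10.8, -7.2, 0.4, -0.6, 1.1, 0.2, -0.9, 0.5,
                    6.1, -0.3, 0.7, -0.4, 0.8, 0.1, -0.2])
pse = lenth_pse(effects)
d = len(effects) / 3                    # Lenth's approximate df (m/3)
margin = t.ppf(0.975, d) * pse          # margin of error (ME)
active = np.abs(effects) > margin
print(f"PSE = {pse:.3f}, ME = {margin:.3f}")
print("candidate active effects (indices):", np.where(active)[0])
```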
10.
This study develops a robust automatic algorithm for clustering probability density functions, building on previous research. Unlike other existing methods, which often require the number of clusters to be pre-determined, this method can self-organize data groups based on the original data structure. The proposed clustering method is also robust to noise. Three synthetic-data examples and a real-world COREL dataset are used to illustrate the accuracy and effectiveness of the proposed approach.
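The abstract stays at a high level, so the following is only a generic sketch of one common way to cluster density functions: estimate each density by a kernel density estimate on a shared grid, compute pairwise L2 distances between the curves, and feed the distances to hierarchical clustering. This is an illustrative stand-in, not the authors' self-organizing algorithm.

```python
# Sketch: cluster probability density functions via KDE + L2 distance
# + hierarchical clustering. Samples are simulated in two groups.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
samples = [rng.normal(0, 1, 200) for _ in range(5)] + \
          [rng.normal(4, 1, 200) for _ in range(5)]   # two density groups

grid = np.linspace(-5, 9, 400)
curves = np.array([gaussian_kde(s)(grid) for s in samples])

# Pairwise L2 distances between the estimated density curves
dx = grid[1] - grid[0]
dist = np.sqrt(((curves[:, None, :] - curves[None, :, :]) ** 2).sum(-1) * dx)

labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=2, criterion="maxclust")
print("cluster labels:", labels)
```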