  Fee-based full text: 1766
  Free: 78
  Free (domestic): 8
Management: 78
Ethnology: 39
Demography: 33
Collected works: 110
Theory and methodology: 93
General: 1005
Sociology: 75
Statistics: 419
  2024: 7
  2023: 17
  2022: 24
  2021: 27
  2020: 42
  2019: 39
  2018: 32
  2017: 39
  2016: 42
  2015: 45
  2014: 83
  2013: 200
  2012: 135
  2011: 116
  2010: 94
  2009: 103
  2008: 96
  2007: 102
  2006: 90
  2005: 92
  2004: 67
  2003: 48
  2002: 67
  2001: 54
  2000: 34
  1999: 20
  1998: 17
  1997: 17
  1996: 21
  1995: 14
  1994: 25
  1993: 5
  1992: 7
  1991: 4
  1990: 7
  1989: 4
  1988: 6
  1987: 2
  1986: 1
  1985: 1
  1984: 2
  1983: 2
  1981: 1
  1980: 1
A total of 1852 results were found (search time: 15 ms).
131.
Many violations of the independence axiom of expected utility can be traced to subjects' attraction to risk-free prospects. The key axiom in this paper, negative certainty independence ([Dillenberger, 2010]), formalizes this tendency. Our main result is a utility representation of all preferences over monetary lotteries that satisfy negative certainty independence together with basic rationality postulates. Such preferences can be represented as if the agent were unsure of how to evaluate a given lottery p; instead, she has in mind a set of possible utility functions over outcomes and displays cautious behavior: she computes the certainty equivalent of p with respect to each possible function in the set and picks the smallest one. The set of utilities is unique in a well-defined sense. We show that our representation can also be derived from a "cautious" completion of an incomplete preference relation.
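A minimal numerical sketch of the cautious rule the abstract describes: compute the certainty equivalent of a lottery under each utility in a set and act on the smallest. The CRRA utilities and the example lottery below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Hypothetical set of CRRA utilities u(x) = x**(1-g) / (1-g); their inverses are known in closed form.
gammas = [0.2, 0.5, 0.8]

def u(x, g):
    return x ** (1.0 - g) / (1.0 - g)

def u_inv(v, g):
    return (v * (1.0 - g)) ** (1.0 / (1.0 - g))

def cautious_certainty_equivalent(outcomes, probs, gammas):
    """Certainty equivalent of the lottery under each utility; the cautious agent uses the smallest."""
    outcomes = np.asarray(outcomes, dtype=float)
    ces = []
    for g in gammas:
        eu = np.dot(probs, u(outcomes, g))   # expected utility of the lottery under u_g
        ces.append(u_inv(eu, g))             # invert back to money units
    return min(ces), ces

# Example lottery p: 100 with probability 0.5, 10 with probability 0.5.
ce_min, ce_all = cautious_certainty_equivalent([100.0, 10.0], [0.5, 0.5], gammas)
print(ce_min, ce_all)
```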
132.
As service failures are inevitable, firms must be prepared to recover from and learn from them. Yet the majority of customers are still dissatisfied with the way firms resolve their complaints. Can learning to reduce service failures reduce customer dissatisfaction, and to what extent are such reductions sustainable? Previous research showed that organizational learning curves for customer dissatisfaction (i) follow a U-shaped function of operating experience and (ii) are heterogeneous across firms. In this paper, I tease out where the U-shaped learning-curve effect and the learning-curve heterogeneity originate: in service failure itself, or in customers' propensity to complain to a third party given the occurrence of a service failure. Using quarterly data for nine major US airlines over 11 years, I find that both the U-shaped learning-curve effect and the learning-curve heterogeneity originate in the propensity to complain. In the long term, reductions in service failure did not translate into sustainable reductions in customer dissatisfaction; customers' propensity to complain eventually went up. Managing the propensity to complain provides more opportunity for a firm to distinguish itself from competitors.
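A hedged sketch of how a U-shaped learning curve in complaint rates can be checked against operating experience. The quadratic-in-log-experience specification and the simulated panel below are assumptions for illustration only, not the paper's model or data.

```python
import numpy as np

# Simulated airline-quarter series: hypothetical values, not the study's data.
rng = np.random.default_rng(0)
experience = np.linspace(1, 400, 44)                  # cumulative operating experience
log_exp = np.log(experience)
true_rate = 2.0 - 0.8 * log_exp + 0.08 * log_exp**2   # built-in U shape
complaints = true_rate + rng.normal(0, 0.05, size=log_exp.size)

# Fit complaint rate ~ b0 + b1*log(exp) + b2*log(exp)^2; b2 > 0 indicates a U-shaped curve.
X = np.column_stack([np.ones_like(log_exp), log_exp, log_exp**2])
beta, *_ = np.linalg.lstsq(X, complaints, rcond=None)
turning_point = np.exp(-beta[1] / (2 * beta[2]))      # experience level at the curve's minimum
print(beta, turning_point)
```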
133.
For expected customer contribution, the most critical factor in computing customer assets, this paper proposes a least-squares regression approach that fits a function for expected customer contribution and applies it within the customer asset formula, yielding a customer asset measurement model. The method is illustrated with a case study of fixed-asset loans to the catering and entertainment industry at a branch of China Construction Bank, and the resulting customer asset estimates are subjected to goodness-of-fit and significance tests.
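A minimal sketch of the two steps the abstract describes: fit an expected-contribution function by least squares, then discount the fitted future contributions into a customer-asset value. The quadratic trend, discount rate, horizon, and data below are illustrative assumptions; the paper's actual functional form and case data are not reproduced.

```python
import numpy as np

# Hypothetical history of one customer's periodic contribution (illustrative values only).
periods = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
contribution = np.array([12.0, 15.5, 18.2, 19.9, 21.1])

# Step 1: least-squares fit of the expected-contribution function (a quadratic trend is assumed here).
X = np.column_stack([np.ones_like(periods), periods, periods**2])
beta, *_ = np.linalg.lstsq(X, contribution, rcond=None)

def expected_contribution(t):
    """Fitted expected contribution in period t."""
    return np.array([1.0, t, t**2]) @ beta

# Step 2: discount the fitted future contributions into a customer-asset value.
discount_rate = 0.08   # assumed
horizon = 10           # assumed retention horizon; periods 6..10 are forecasts
future = np.arange(6, horizon + 1)
customer_asset = sum(expected_contribution(t) / (1 + discount_rate) ** (t - 5) for t in future)
print(beta, customer_asset)
```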
134.
刘伟, 游静 (Liu Wei, You Jing). 《管理工程学报》 (Journal of Industrial Engineering and Engineering Management), 2008, 22(3): 141-145
By building a knowledge accumulation curve model and using it in a case analysis, this paper studies how the degree of knowledge sharing, the degree of system innovation, and knowledge acquisition capability affect the schedule of information systems integration. It derives how the integration schedule shifts as these factors change, and how strongly the schedule savings depend on knowledge sharing under different levels of knowledge innovation and knowledge acquisition capability.
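A toy sketch of how a knowledge accumulation curve can be turned into an integration-schedule estimate: knowledge grows toward the level the integration project requires at a rate driven by sharing and acquisition capability, and the integration time is when the curve crosses that requirement. The exponential dynamics and parameter names are assumptions for illustration, not the paper's model.

```python
import numpy as np  # not strictly needed here, kept for consistency with the other sketches

def integration_time(k_required, sharing, acquisition, innovation, k_max=1.0, dt=0.1, t_max=200.0):
    """Time until accumulated knowledge reaches the level the integration project requires.

    Assumed toy dynamics: dK/dt = (sharing + acquisition) * (k_max - K),
    with the requirement scaled up by the degree of system innovation.
    """
    target = min(k_required * (1.0 + innovation), 0.99 * k_max)
    k, t = 0.0, 0.0
    while k < target and t < t_max:
        k += (sharing + acquisition) * (k_max - k) * dt
        t += dt
    return t

# Higher knowledge sharing shortens the schedule; the saving shrinks when acquisition capability is already high.
print(integration_time(0.6, sharing=0.05, acquisition=0.05, innovation=0.3))
print(integration_time(0.6, sharing=0.15, acquisition=0.05, innovation=0.3))
```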
135.
Inference in hybrid Bayesian networks using dynamic discretization (total citations: 1; self-citations: 0; cited by others: 1)
We consider approximate inference in hybrid Bayesian networks (BNs) and present a new iterative algorithm that efficiently combines dynamic discretization with robust propagation algorithms on junction trees. Our approach offers a significant extension to Bayesian network theory and practice by providing a flexible way of modeling continuous nodes in BNs conditioned on complex configurations of evidence and intermixed with discrete nodes as both parents and children of continuous nodes. Our algorithm is implemented in a commercial Bayesian network software package, AgenaRisk, which allows model construction and testing to be carried out easily. The results from the empirical trials clearly show how our software can deal effectively with different types of hybrid models containing elements of expert judgment as well as statistical inference. In particular, we show how the rapid convergence of the algorithm towards zones of high probability density makes robust inference analysis possible even in situations where, owing to the lack of information in both prior and data, robust sampling becomes infeasible.
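A toy one-node illustration of the dynamic discretization idea: refine the grid where posterior mass concentrates. This is not the junction-tree algorithm implemented in AgenaRisk; the prior, likelihood, and splitting rule below are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np
from scipy.stats import norm

def dynamic_discretize(prior, likelihood, lo, hi, n_init=4, n_iter=6):
    """Iteratively refine a 1-D grid, splitting the bin that carries the most posterior mass."""
    edges = list(np.linspace(lo, hi, n_init + 1))
    for _ in range(n_iter):
        e = np.array(edges)
        mids, widths = 0.5 * (e[:-1] + e[1:]), np.diff(e)
        mass = prior(mids) * likelihood(mids) * widths   # unnormalized posterior mass per bin
        k = int(np.argmax(mass))                         # split the highest-mass bin in half
        edges.insert(k + 1, 0.5 * (edges[k] + edges[k + 1]))
    e = np.array(edges)
    mids, widths = 0.5 * (e[:-1] + e[1:]), np.diff(e)
    posterior = prior(mids) * likelihood(mids) * widths
    return e, posterior / posterior.sum()

# Continuous node X ~ N(0, 1); evidence Y = 1.5 observed with Y | X ~ N(X, 0.5^2).
prior = lambda x: norm.pdf(x, 0.0, 1.0)
likelihood = lambda x: norm.pdf(1.5, x, 0.5)
edges, post = dynamic_discretize(prior, likelihood, -4.0, 4.0)
print(edges)   # bins cluster around the high-density region near the posterior mean
```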
136.
In this paper, we derive sequential conditional probability ratio tests to compare diagnostic tests without distributional assumptions on test results. The test statistics in our method are nonparametric weighted areas under the receiver operating characteristic curves. With the new method, a decision to stop the diagnostic trial early is unlikely to be reversed should the trial continue to its planned end. The conservatism of this approach, which keeps stopping boundaries more conservative during the course of the trial, is especially appealing for diagnostic trials since the endpoint is not death. In addition, the maximum sample size of our method is no greater than that of a fixed-sample test with a similar power function. Simulation studies are performed to evaluate the properties of the proposed sequential procedure. We illustrate the method using data from a thoracic aorta imaging study.
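A small sketch of the nonparametric (Mann-Whitney) AUC that underlies such weighted-AUC statistics, and of the paired AUC difference between two diagnostic tests read on the same subjects. The weighting scheme and the sequential stopping boundaries of the paper are not reproduced; the data are simulated.

```python
import numpy as np

def empirical_auc(diseased, nondiseased):
    """Nonparametric AUC: P(score_diseased > score_nondiseased) + 0.5 * P(tie)."""
    d = np.asarray(diseased)[:, None]
    nd = np.asarray(nondiseased)[None, :]
    return np.mean((d > nd) + 0.5 * (d == nd))

rng = np.random.default_rng(1)
n_d, n_h = 30, 40
# Two tests measured on the same diseased/healthy subjects (paired design); hypothetical scores.
test_a_d, test_a_h = rng.normal(1.2, 1.0, n_d), rng.normal(0.0, 1.0, n_h)
test_b_d, test_b_h = test_a_d + rng.normal(0, 0.5, n_d), test_a_h + rng.normal(0, 0.5, n_h)

delta_auc = empirical_auc(test_a_d, test_a_h) - empirical_auc(test_b_d, test_b_h)
print(delta_auc)   # the kind of statistic a sequential procedure would monitor against its boundaries
```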
137.
138.
We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster-mean level as though it were measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the errors-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might seem to imply that large numbers of replicates are necessary for each cluster-mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small-sample performance of the methodology. We also develop theory establishing statistical consistency.
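A moment-based sketch of the underlying idea: treat within-cluster scatter as measurement error around the cluster mean and strip it out using replicates. It is not the paper's Fourier-deconvolution or smoothing estimator, and the simulated data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, m = 200, 3                          # m replicate observations per cluster
cluster_means = rng.gamma(shape=2.0, scale=1.0, size=n_clusters)          # unknown target distribution
obs = cluster_means[:, None] + rng.normal(0, 0.7, size=(n_clusters, m))   # replicates with error

sample_means = obs.mean(axis=1)
# Error variance estimated from within-cluster replicates, in the same spirit as estimating the
# error distribution from repeated measurements before deconvolving.
sigma2_err = np.mean(obs.var(axis=1, ddof=1))
# Var(sample mean) = Var(cluster mean) + sigma2_err / m, so subtract the attenuation term.
var_u = sample_means.var(ddof=1) - sigma2_err / m
print(cluster_means.var(ddof=1), var_u)         # the corrected estimate should track the first value
```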
139.
In many diagnostic studies, multiple diagnostic tests are performed on each subject or multiple disease markers are available. The information from these markers should typically be combined to improve diagnostic accuracy. We consider the problem of comparing the discriminatory abilities of two groups of biomarkers. Specifically, this article focuses on confidence interval estimation for the difference between paired AUCs based on optimally combined markers under the assumption of multivariate normality. Simulation studies demonstrate that the proposed generalized variable approach provides confidence intervals with satisfactory coverage probabilities at finite sample sizes. The proposed method can also easily provide P-values for hypothesis testing. An application to a subset of data from a study on coronary heart disease illustrates the utility of the method in practice.
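A sketch of comparing two groups of markers through their best linear combinations under multivariate normality. The combination used here is the commonly used binormal direction (S0 + S1)^(-1) (mu1 - mu0), with the empirical AUC of the combined score; the paper's generalized-variable interval construction is not reproduced, and the data are simulated.

```python
import numpy as np

def combined_auc(x_dis, x_non):
    """Empirical AUC of the linear combination a = (S0 + S1)^(-1) (mu1 - mu0) of the markers."""
    mu1, mu0 = x_dis.mean(axis=0), x_non.mean(axis=0)
    s1 = np.cov(x_dis, rowvar=False)
    s0 = np.cov(x_non, rowvar=False)
    a = np.linalg.solve(s0 + s1, mu1 - mu0)
    score_d, score_n = x_dis @ a, x_non @ a
    return np.mean((score_d[:, None] > score_n[None, :]) + 0.5 * (score_d[:, None] == score_n[None, :]))

rng = np.random.default_rng(3)
# Two groups of biomarkers on the same diseased / non-diseased subjects (hypothetical parameters).
group1_d = rng.multivariate_normal([1.0, 0.8], np.eye(2), 50)
group1_n = rng.multivariate_normal([0.0, 0.0], np.eye(2), 60)
group2_d = rng.multivariate_normal([0.6, 0.5], np.eye(2), 50)
group2_n = rng.multivariate_normal([0.0, 0.0], np.eye(2), 60)

print(combined_auc(group1_d, group1_n) - combined_auc(group2_d, group2_n))
```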
140.
In studies to assess the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to be ascertained on all study subjects. Although it may be more ethical or cost-effective to ascertain the true disease status at a higher rate among study subjects whose screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves, and the area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency, and robustness to model misspecification. The proposed bias correction estimators are applied to data from a study of screening tests for neonatal hearing loss.
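A sketch of the reweighting idea for a continuous test: each verified subject is weighted by the inverse of its verification probability so the verified sample stands in for the full screened cohort. The verification model and simulated data below are assumptions; the paper's imputation estimators and variance theory are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
disease = rng.binomial(1, 0.2, n)
score = rng.normal(disease * 1.0, 1.0)                     # continuous screening test result

# Verification depends on the test result: higher scores are verified more often.
# True probabilities are used as weights here; in practice they would be estimated (e.g. logistic regression).
p_verify = 1.0 / (1.0 + np.exp(-(score - 0.5)))
verified = rng.binomial(1, p_verify).astype(bool)

def ipw_auc(score, disease, verified, p_verify):
    """Inverse-probability-weighted empirical AUC using verified subjects only."""
    s, d, w = score[verified], disease[verified], 1.0 / p_verify[verified]
    sd, wd = s[d == 1], w[d == 1]
    sn, wn = s[d == 0], w[d == 0]
    wts = wd[:, None] * wn[None, :]
    concord = (sd[:, None] > sn[None, :]) + 0.5 * (sd[:, None] == sn[None, :])
    return np.sum(wts * concord) / np.sum(wts)

naive = ipw_auc(score, disease, verified, np.ones(n))      # unweighted, verification-biased AUC
print(naive, ipw_auc(score, disease, verified, p_verify))  # the weighted estimate corrects the bias
```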