941.
We propose a distribution‐free entropy‐based methodology to calculate the expected value of an uncertainty reduction effort and present our results within the context of reducing demand uncertainty. In contrast to existing techniques, the methodology does not require a priori assumptions regarding the underlying demand distribution, does not require sampled observations to be the mechanism by which uncertainty is reduced, and provides an expectation of information value as opposed to an upper bound. In our methodology, a decision maker uses his existing knowledge combined with the maximum entropy principle to model both his present and potential future states of uncertainty as probability densities over all possible demand distributions. Modeling uncertainty in this way provides for a theoretically justified and intuitively satisfying method of valuing an uncertainty reduction effort without knowing the information to be revealed. We demonstrate the methodology's use in three different settings: (i) a newsvendor valuing knowledge of expected demand, (ii) a short life cycle product supply manager considering the adoption of a quick response strategy, and (iii) a revenue manager making a pricing decision with limited knowledge of the market potential for his product.
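As a hedged illustration of the newsvendor setting (i), the following Python sketch computes the expected value of learning the true mean demand. It is not the authors' entropy-based method: a uniform prior over three candidate Poisson means is a crude stand-in for their maximum-entropy density over demand distributions, and all numbers are hypothetical.

```python
import math

# Illustrative only: unit cost c and price p, Poisson demand with an
# uncertain mean. The uniform prior over three candidate means is a crude
# stand-in for a maximum-entropy density over demand distributions.
c, p = 4.0, 10.0
mean_grid = [20.0, 30.0, 40.0]                  # hypothetical candidate means
prior = [1.0 / len(mean_grid)] * len(mean_grid)

def poisson_pmf(k, lam):
    # computed on the log scale to avoid overflow for large k
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def expected_profit(q, lam, kmax=200):
    # E[p * min(D, q)] - c*q for D ~ Poisson(lam), truncated at kmax
    sales = sum(poisson_pmf(k, lam) * min(k, q) for k in range(kmax))
    return p * sales - c * q

def best_profit(candidates):
    # optimal expected profit given a (mean, weight) mixture over demand means
    return max(
        sum(w * expected_profit(q, lam) for lam, w in candidates)
        for q in range(80)
    )

# Value of learning the true mean: expected optimum under full knowledge
# minus the optimum attainable under the prior alone (always nonnegative).
profit_prior = best_profit(list(zip(mean_grid, prior)))
profit_informed = sum(w * best_profit([(lam, 1.0)])
                      for lam, w in zip(mean_grid, prior))
evi = profit_informed - profit_prior
```

The nonnegativity of `evi` is the standard value-of-information inequality: the maximum of an average never exceeds the average of the maxima.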
942.
U.S. Environmental Protection Agency benchmark doses for dichotomous cancer responses are often estimated using a multistage model based on a monotonic dose‐response assumption. To account for model uncertainty in the estimation process, several model averaging methods have been proposed for risk assessment. In this article, we extend the usual parameter space in the multistage model for monotonicity to allow for the possibility of a hormetic dose‐response relationship. Bayesian model averaging is used to estimate the benchmark dose and to provide posterior probabilities for monotonicity versus hormesis. Simulation studies show that the newly proposed method provides robust point and interval estimation of a benchmark dose in the presence or absence of hormesis. We also apply the method to two data sets on carcinogenic response of rats to 2,3,7,8‐tetrachlorodibenzo‐p‐dioxin.
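A heavily simplified sketch of the model-averaging idea, not the authors' multistage formulation: two logistic dose-response models, one forced to be monotone and one whose sign-free linear term allows a hormetic dip, are compared via crude grid-prior marginal likelihoods on toy data. All data and grids are hypothetical.

```python
import math
from itertools import product

# Toy dichotomous dose-response data (hypothetical): dose, group size,
# responders; the dip at the low dose is suggestive of hormesis.
doses = [0.0, 0.5, 1.0, 2.0]
n_sub = [50, 50, 50, 50]
y_resp = [2, 1, 8, 25]

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def loglik(prob_fn):
    ll = 0.0
    for d, ni, yi in zip(doses, n_sub, y_resp):
        prob = min(max(prob_fn(d), 1e-9), 1 - 1e-9)
        ll += yi * math.log(prob) + (ni - yi) * math.log(1 - prob)
    return ll

def marginal_lik(grids, make_prob):
    # crude marginal likelihood: likelihood averaged over a uniform grid prior
    points = list(product(*grids))
    return sum(math.exp(loglik(make_prob(*th))) for th in points) / len(points)

a_grid = [x / 2 for x in range(-8, 1)]       # intercepts -4 .. 0
b_mono = [x / 2 for x in range(0, 7)]        # slopes >= 0: monotone model
b_horm = [x / 2 for x in range(-6, 7)]       # slopes may be negative: dip
c_grid = [x / 4 for x in range(0, 5)]        # curvature >= 0

m_mono = marginal_lik([a_grid, b_mono],
                      lambda a, b: (lambda d: logistic(a + b * d)))
m_horm = marginal_lik([a_grid, b_horm, c_grid],
                      lambda a, b, c: (lambda d: logistic(a + b * d + c * d * d)))

# equal prior model probabilities -> posterior model probabilities
p_mono = m_mono / (m_mono + m_horm)
p_horm = m_horm / (m_mono + m_horm)
```

A model-averaged benchmark dose would then weight each model's BMD estimate by these posterior probabilities, which is the essence of the approach the abstract describes.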
943.
In this article, Bayesian networks are used to model semiconductor lifetime data obtained from a cyclic stress test system. The data of interest are a mixture of log‐normal distributions, representing two dominant physical failure mechanisms. Moreover, the data can be censored due to limited test resources. For a better understanding of the complex lifetime behavior, interactions between test settings, geometric designs, material properties, and physical parameters of the semiconductor device are modeled by a Bayesian network. Statistical toolboxes in MATLAB® have been extended and applied to find the best structure of the Bayesian network and to perform parameter learning. Due to censored observations Markov chain Monte Carlo (MCMC) simulations are employed to determine the posterior distributions. For model selection the automatic relevance determination (ARD) algorithm and goodness‐of‐fit criteria such as marginal likelihoods, Bayes factors, posterior predictive density distributions, and sum of squared errors of prediction (SSEP) are applied and evaluated. The results indicate that the application of Bayesian networks to semiconductor reliability provides useful information about the interactions between the significant covariates and serves as a reliable alternative to currently applied methods.
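The data model described, a two-component log-normal mixture with right-censored lifetimes, can be sketched as a log-likelihood in a few lines of Python. This is only the observation model under assumed parameter names; the structure learning, ARD, and MCMC machinery of the article is beyond a short example.

```python
import math

def lognorm_logpdf(t, mu, sigma):
    # log density of a log-normal with log-scale mean mu and SD sigma
    z = (math.log(t) - mu) / sigma
    return -math.log(t * sigma * math.sqrt(2 * math.pi)) - 0.5 * z * z

def lognorm_sf(t, mu, sigma):
    # survival function P(T > t)
    z = (math.log(t) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

def mixture_loglik(times, censored, w, mu1, s1, mu2, s2):
    # censored[i] is True when unit i survived past times[i];
    # w is the weight of the first failure mechanism
    ll = 0.0
    for t, c in zip(times, censored):
        if c:
            ll += math.log(w * lognorm_sf(t, mu1, s1)
                           + (1 - w) * lognorm_sf(t, mu2, s2))
        else:
            ll += math.log(w * math.exp(lognorm_logpdf(t, mu1, s1))
                           + (1 - w) * math.exp(lognorm_logpdf(t, mu2, s2)))
    return ll

# hypothetical lifetimes (cycles) with the last unit still running
times = [100.0, 500.0, 2000.0]
censored = [False, False, True]
ll = mixture_loglik(times, censored, 0.5, 5.0, 0.5, 7.0, 0.5)
```

Censored units contribute their survival probability rather than a density, which is exactly why the posterior becomes intractable enough to warrant the MCMC treatment mentioned in the abstract.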
944.
Because household commercial health insurance participation rates take values in the closed interval [0,1], this paper develops a Bayesian quantile regression approach for proportional response data based on the Tobit model. By placing a "spike-and-slab" prior on the regression coefficients and applying an EM algorithm, we propose a threshold-rule-based Bayesian variable selection method. Extensive simulation studies confirm the effectiveness of the proposed method, which is also easy to implement and computationally inexpensive. Finally, the method is applied to an empirical analysis of household commercial health insurance data to study the factors that influence household participation rates at different quantile levels, yielding a number of meaningful findings.
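As background for the quantile-regression machinery (not the paper's spike-and-slab EM procedure), here is a minimal Python sketch of the asymmetric "check" loss that defines quantile regression: minimizing its total over a sample recovers a sample quantile. The sample and quantile levels are hypothetical.

```python
def check_loss(u, tau):
    # asymmetric check loss: tau*u for u >= 0, (tau - 1)*u for u < 0
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(xs, tau):
    # the sample point minimizing total check loss is a tau-quantile;
    # quantile regression replaces the constant c with x'beta
    return min(xs, key=lambda c: sum(check_loss(x - c, tau) for x in xs))
```

In the Tobit quantile regression of the paper, the same loss is applied to a latent response that is censored to stay within [0,1], with the Bayesian treatment arising from an asymmetric-Laplace working likelihood.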
945.
A step toward a strategic foundation for rational expectations equilibrium is taken by considering a double auction with n buyers and m sellers with interdependent values and affiliated private information. If there are sufficiently many buyers and sellers, and their bids are restricted to a sufficiently fine discrete set of prices, then, generically, there is an equilibrium in nondecreasing bidding functions that is arbitrarily close to the unique fully revealing rational expectations equilibrium of the limit market with unrestricted bids and a continuum of agents. In particular, the large double‐auction equilibrium is almost efficient and almost fully aggregates the agents' information.
946.
This paper outlines the development of the information market from implicit to explicit forms, tracing its progression through distinct historical stages, from its embryonic period to its period of independent development, and examines the social causes behind that development.
947.
Summary Misclassification, or noise, in the sampling stage of a Bayesian scheme can seriously affect the values of decision criteria such as the Bayes Risk and the Expected Value of Sample Information. This problem does not seem to have been much addressed in the existing literature. In this article, using an approach based on hypergeometric functions and numerical computation, we study the effects of this noise under the two most important loss functions: the quadratic and the absolute value. A numerical example illustrates these effects in a representative case, using both loss functions, and provides additional insights into the general problem. Research partially supported by NSERC grant A 9249 (Canada) and FICU Grant 2000/pas/13. The authors wish to thank colleagues at the University of Alberta in Edmonton, Canada, for very stimulating discussions, and an anonymous referee for drawing their attention to three relevant references that have enriched the content of this final version.
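To illustrate the qualitative effect (not the authors' hypergeometric-function analysis), a grid-posterior sketch in Python: symmetric misclassification of Bernoulli outcomes flattens the likelihood and inflates the Bayes risk under quadratic loss. All rates and counts are hypothetical.

```python
# Illustrative grid posterior for a Bernoulli parameter p observed through
# a noisy channel: with false-positive rate e0 and false-negative rate e1,
# an apparent success occurs with probability q = p*(1 - e1) + (1 - p)*e0.
def posterior_grid(y, n, e0, e1, grid_size=401):
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    weights = []
    for p in grid:
        q = min(max(p * (1 - e1) + (1 - p) * e0, 1e-12), 1 - 1e-12)
        weights.append(q ** y * (1 - q) ** (n - y))   # flat prior on p
    total = sum(weights)
    return grid, [w / total for w in weights]

def quadratic_bayes_risk(grid, post):
    # posterior variance = Bayes risk of the posterior mean under squared error
    m = sum(p * w for p, w in zip(grid, post))
    return sum((p - m) ** 2 * w for p, w in zip(grid, post))

grid, clean = posterior_grid(30, 100, 0.0, 0.0)   # noise-free sampling
_, noisy = posterior_grid(30, 100, 0.1, 0.1)      # 10% misclassification
```

Comparing the two risks shows the inflation the abstract studies: the noisy posterior is more dispersed, so the Bayes risk under quadratic loss is strictly larger.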
948.
The aim of a phase II clinical trial is to decide whether or not to develop an experimental therapy further through phase III clinical evaluation. In this paper, we present a Bayesian approach to the phase II trial, although we assume that subsequent phase III clinical trials will have standard frequentist analyses. The decision whether to conduct the phase III trial is based on the posterior predictive probability of a significant result being obtained. This fusion of Bayesian and frequentist techniques accepts the current paradigm for expressing objective evidence of therapeutic value, while optimizing the form of the phase II investigation that leads to it. By using prior information, we can assess whether a phase II study is needed at all, and how much or what sort of evidence is required. The proposed approach is illustrated by the design of a phase II clinical trial of a multi‐drug resistance modulator used in combination with standard chemotherapy in the treatment of metastatic breast cancer. Copyright © 2005 John Wiley & Sons, Ltd.
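The central quantity, the predictive probability that a frequentist phase III test will be significant, has a closed form in the simplest normal setting. The sketch below is an illustration of that idea, not the paper's design; the effect prior, SD, and sample size are hypothetical.

```python
import math

def phi(x):
    # standard normal CDF
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

# Hypothetical phase III design: two-arm comparison of means with known
# SD sigma and n patients per arm; the phase II posterior for the true
# effect delta is summarized as N(mu0, tau0^2).
sigma, n = 1.0, 200
mu0, tau0 = 0.25, 0.15
se = sigma * math.sqrt(2.0 / n)            # SE of the effect estimate
z_crit = 1.959963984540054                 # upper tail of a two-sided 5% test

# Marginally, the phase III estimate is N(mu0, tau0^2 + se^2), so the
# predictive probability of a significant positive result is
pred_prob = 1.0 - phi((z_crit * se - mu0) / math.sqrt(tau0**2 + se**2))
```

Note how `pred_prob` mixes frequentist significance (`z_crit * se`) with the Bayesian prior spread (`tau0`), which is exactly the Bayesian-frequentist fusion the abstract describes.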
949.
Abstract.  Methodology for Bayesian inference is considered for a stochastic epidemic model which permits mixing on both local and global scales. Interest focuses on estimation of the within- and between-group transmission rates given data on the final outcome. The model is sufficiently complex that the likelihood of the data is numerically intractable. To overcome this difficulty, an appropriate latent variable is introduced, about which asymptotic information is known as the population size tends to infinity. This yields a method for approximate inference for the true model. The methods are applied to real data, tested with simulated data, and also applied to a simple epidemic model for which exact results are available for comparison.
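A toy forward simulation conveys the kind of two-level mixing model involved. This is a discrete-generation chain-binomial variant, not the authors' model or their inference scheme: each susceptible must escape global contact from every infective and extra local contact from infectives in the same household. All rates are hypothetical.

```python
import random

random.seed(7)

def final_size(n_households=50, hh_size=3, p_local=0.3, p_global=0.005):
    # 'S' susceptible, 'I' infective this generation, 'R' removed
    status = [['S'] * hh_size for _ in range(n_households)]
    status[0][0] = 'I'                       # one initial infective
    while any('I' in hh for hh in status):
        total_inf = sum(hh.count('I') for hh in status)
        new_cases = []
        for h, hh in enumerate(status):
            local_inf = hh.count('I')
            for i, s in enumerate(hh):
                if s != 'S':
                    continue
                # escape global contact from every infective, plus extra
                # local contact from infectives in the same household
                p_escape = ((1 - p_global) ** total_inf
                            * (1 - p_local) ** local_inf)
                if random.random() > p_escape:
                    new_cases.append((h, i))
        for hh in status:                    # this generation recovers...
            for i, s in enumerate(hh):
                if s == 'I':
                    hh[i] = 'R'
        for h, i in new_cases:               # ...and new cases take over
            status[h][i] = 'I'
    return sum(hh.count('R') for hh in status)

fs = final_size()
```

The simulation returns exactly the "final outcome" data the abstract conditions on; the inference problem is to recover `p_local` and `p_global` from such totals, which is what makes the likelihood intractable.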
950.
Conventional clinical trial design involves considerations of power, and sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a ‘positive outcome’. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than to achieve a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two‐sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given. Copyright © 2005 John Wiley & Sons, Ltd.
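The definition "prior expectation of the power" suggests a direct Monte Carlo sketch. The example below is a hedged illustration for a hypothetical binary-endpoint trial, not the paper's closed-form results: true event rates are drawn from Beta priors and the normal-approximation power of a two-proportion z-test is averaged; all priors and the sample size are illustrative.

```python
import math
import random

random.seed(1)

# Hypothetical two-arm binary trial with n patients per arm. True event
# rates are drawn from Beta priors; assurance is the prior mean of the
# normal-approximation power of the two-proportion z-test.
n = 200
z_crit = 1.959963984540054                  # two-sided 5% critical value

def power_two_prop(p1, p2, n):
    # normal-approximation power of the z-test for H0: p1 == p2
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    if se == 0.0:
        return 1.0 if p1 != p2 else 0.0
    shift = abs(p2 - p1) / se
    # P(|Z| > z_crit) when the test statistic is centered at `shift`
    return (0.5 * math.erfc((z_crit - shift) / math.sqrt(2.0))
            + 0.5 * math.erfc((z_crit + shift) / math.sqrt(2.0)))

def assurance(n_sims=20000):
    total = 0.0
    for _ in range(n_sims):
        p_ctrl = random.betavariate(20, 60)     # prior: control rate near 25%
        p_trt = random.betavariate(30, 50)      # prior: treatment near 37.5%
        total += power_two_prop(p_ctrl, p_trt, n)
    return total / n_sims

a = assurance()
```

With these illustrative priors, the assurance is noticeably lower than the conditional power at the prior means, which is precisely the gap between power and assurance that the abstract emphasizes.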