A total of 1,479 results matched the query; entries 11-20 are shown below.
11.
The benchmark dose (BMD) is an exposure level that would induce a small increase in risk (the BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of the BMD is not restricted to the experimental doses and uses most of the available dose-response information. To quantify the statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (the BMDL), and may even consider it a more conservative alternative to the BMD itself. Computation of the BMDL may involve normal confidence limits for the BMD in conjunction with the delta method. Factors such as small sample size and nonlinearity in the model parameters can therefore affect the performance of the delta-method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method for estimating the BMDL that uses a resampling of residuals after model fitting together with a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed, and bootstrap BMDLs are smaller on average than delta-method BMDLs, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta-method BMDL, as its coverage probability is closer to the nominal level. We find that BMD and BMDL estimates are generally insensitive to model choice, provided the models fit the data comparably well near the region of the BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, developmental toxicity experiments under the standard protocol support dose-response assessment at a 5% BMR for the BMD and a 95% confidence level for the BMDL.
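As a rough illustration of the general bootstrap-BMDL idea, the sketch below fits a quantal-linear model to hypothetical dose-group counts and takes the BMDL as the lower 5th percentile of parametric-bootstrap BMDs; it is not the authors' residual-resampling and one-step scheme for clustered binary data, and all data values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Hypothetical dose-group data: dose levels, group sizes, affected counts.
doses = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
n = np.array([25, 25, 25, 25, 25])
y = np.array([1, 2, 4, 8, 15])

def prob(theta, d):
    """Quantal-linear dose-response model: P(d) = g + (1 - g)(1 - exp(-b d))."""
    g, b = theta
    return g + (1.0 - g) * (1.0 - np.exp(-b * d))

def fit(y_obs):
    """Maximum-likelihood estimate of (g, b) from binomial counts y_obs."""
    def nll(theta):
        p = np.clip(prob(theta, doses), 1e-9, 1.0 - 1e-9)
        return -np.sum(binom.logpmf(y_obs, n, p))
    return minimize(nll, x0=[0.05, 0.01], method="Nelder-Mead").x

def bmd(theta, bmr=0.05):
    """Extra risk BMR = 1 - exp(-b * BMD), so BMD = -ln(1 - BMR) / b."""
    return -np.log(1.0 - bmr) / theta[1]

rng = np.random.default_rng(0)
theta_hat = fit(y)
p_hat = np.clip(prob(theta_hat, doses), 1e-9, 1.0 - 1e-9)

# Parametric bootstrap: resample group counts from the fitted model, refit,
# recompute the BMD; the BMDL is the lower 5th percentile (one-sided 95%).
boot_bmds = [bmd(fit(rng.binomial(n, p_hat))) for _ in range(1000)]
bmdl = np.percentile(boot_bmds, 5)
print(f"BMD = {bmd(theta_hat):.1f}, bootstrap BMDL = {bmdl:.1f}")
```

A left-skewed bootstrap distribution of BMDs, as described in the abstract, would pull such a percentile-based BMDL below the delta-method limit.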
12.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distributional assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared with probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by their significant digits) added about one order of magnitude of uncertainty to the deterministic point estimate. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
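The effect of significant-digit imprecision alone can be sketched with plain interval arithmetic; the toy exposure equation and all numbers below are hypothetical, and a full probability bounds analysis would also treat the distributional inputs as probability boxes rather than scalars.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for this example."""
    lo: float
    hi: float
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
    def __truediv__(self, other):
        # Assumes the divisor interval is strictly positive.
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

def rounded(x, half_ulp):
    """Interval implied by reporting x only to its significant digits."""
    return Interval(x - half_ulp, x + half_ulp)

# Toy soil-ingestion intake: C * IR * EF * ED / (BW * AT); values hypothetical.
C  = rounded(100.0, 50.0)      # concentration in soil, mg/kg
IR = rounded(0.0001, 0.00005)  # soil ingestion rate, kg/day
EF = rounded(350.0, 5.0)       # exposure frequency, days/year
ED = rounded(6.0, 0.5)         # exposure duration, years
BW = rounded(15.0, 0.5)        # body weight, kg
AT = rounded(2190.0, 5.0)      # averaging time, days

point = 100.0 * 0.0001 * 350.0 * 6.0 / (15.0 * 2190.0)
bound = C * IR * EF * ED / (BW * AT)
print(f"point estimate: {point:.2e} mg/kg-day")
print(f"interval bound: [{bound.lo:.2e}, {bound.hi:.2e}] mg/kg-day")
```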
13.
Standard algorithms for constructing iterated bootstrap confidence intervals are computationally very demanding, requiring nested levels of bootstrap resampling. We propose an alternative approach to constructing double bootstrap confidence intervals in which the inner level of resampling is replaced by an analytical approximation. The approximation is based on saddlepoint methods and a tail probability approximation of DiCiccio and Martin (1991). Our technique significantly reduces the computational expense of iterated bootstrap calculations. A formal algorithm for constructing these approximate iterated bootstrap confidence intervals is presented, and some crucial practical issues arising in its implementation are discussed. The procedure is illustrated by constructing confidence intervals for ratios of means using both real and simulated data. We repeat an experiment of Schenker (1985) involving bootstrap confidence intervals for a variance and demonstrate that our technique makes the construction of accurate bootstrap confidence intervals feasible in that context. Finally, we investigate the use of the technique in a more complex setting: constructing confidence intervals for a correlation coefficient.
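For context on the computational burden, the sketch below runs a brute-force coverage-calibration double bootstrap for a ratio of means on simulated data; the nested loop performs roughly B_outer x B_inner resampling passes, which is the cost that an analytical inner approximation is meant to avoid. The statistic, data, and loop sizes are illustrative only, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(2.0, size=30)   # simulated numerator sample
y = rng.exponential(1.0, size=30)   # simulated denominator sample

def ratio(xs, ys):
    return xs.mean() / ys.mean()

def percentile_ci(xs, ys, alpha, B, rng):
    """Ordinary percentile bootstrap interval for the ratio of means."""
    stats = np.array([ratio(rng.choice(xs, xs.size), rng.choice(ys, ys.size))
                      for _ in range(B)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

theta_hat = ratio(x, y)
B_outer, B_inner, alpha = 500, 200, 0.05

# Outer loop: for each resample, build the inner-bootstrap interval and check
# whether it covers theta_hat; the empirical coverage shows how far the nominal
# level is from the truth and how alpha would need to be recalibrated.
covered = 0
for _ in range(B_outer):
    xs = x[rng.integers(0, x.size, x.size)]
    ys = y[rng.integers(0, y.size, y.size)]
    lo, hi = percentile_ci(xs, ys, alpha, B_inner, rng)
    covered += bool(lo <= theta_hat <= hi)

print(f"nominal {1 - alpha:.0%} interval has estimated coverage "
      f"{covered / B_outer:.1%} after {B_outer * B_inner} inner resampling runs")
```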
14.
Coherent decision analysis with inseparable probabilities and utilities
This article explores the extent to which a decision maker's probabilities can be measured separately from his or her utilities by observing the acceptance of small monetary gambles. Only a partial separation is achieved: the acceptable gambles are partitioned into a set of belief gambles, which reveals probabilities distorted by marginal utilities for money, and a set of preference gambles, which reveals utilities reciprocally distorted by marginal utilities for money. However, the information in these gambles still enables us to solve the decision maker's problem: the utility-maximizing decision is the one that avoids arbitrage (i.e., incoherence or Dutch books).
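A compact way to state this partial separation, in our own notation (a sketch, not necessarily the article's formulation): if $p_i$ is the decision maker's probability of state $i$ and $u_i'$ is his or her marginal utility for money in that state, then small acceptable gambles reveal only the normalized products

\[
\pi_i \;=\; \frac{p_i\,u_i'}{\sum_j p_j\,u_j'},
\]

so probabilities and marginal utilities enter only through their product and cannot be disentangled from betting behavior alone; nevertheless, the $\pi_i$ are exactly what is needed to identify the arbitrage-free (coherent) decision.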
15.
We study, from the standpoint of coherence, comparative probabilities on an arbitrary family E of conditional events. Given a binary comparative relation on E, coherence conditions on the relation are related to de Finetti's coherent betting system: we consider their connections with the usual properties of comparative probability and with the possibility of representing the relation numerically. In this context, the numerical reference frame is de Finetti's coherent subjective conditional probability, which is not introduced (as in Kolmogorov's approach) as a ratio of probability measures. Another relevant feature of our approach is that the family E need not have any particular algebraic structure, so the ordering can initially be given for a few conditional events of interest and then extended, step by step, while preserving coherence.
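For concreteness, one common reading of "numerical representation" in this setting, in our own notation (writing $\mathcal{E}$ for the family and $\preceq$ for the comparative relation), is the existence of a coherent conditional probability $P$ such that, for all conditional events $E|H$ and $F|K$ in $\mathcal{E}$,

\[
E|H \preceq F|K \iff P(E|H) \le P(F|K),
\]

with coherence of $P$ understood in de Finetti's betting sense rather than via a ratio of unconditional measures.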
16.
In quantum domains, measuring (or observing) one of a pair of complementary variables introduces an unavoidable uncertainty in the value of that variable's complement. Such uncertainties are negligible in Newtonian worlds, where observations can be made without appreciably disturbing the observed system. Hence, one would not expect an observation of a non-quantum probabilistic outcome to affect a probability distribution over subsequently possible states in a way that conflicts with classical probability calculations. This paper examines three problems in which observations appear to affect the probabilities and expected utilities of subsequent outcomes in ways that may appear paradoxical. Deeper analysis reveals that the anomalies arise not from paradox but from faulty inferences drawn from the observations themselves. The notion of quantum decision theory is thus disparaged.
17.
Estimation from Zero-Failure Data
When performing quantitative (or probabilistic) risk assessments, data for many of the potential events in question are often sparse or nonexistent. Some of these events may be well represented by the binomial probability distribution. In this paper, a model for predicting the binomial failure probability, P, from data that include no failures is examined. A review of the literature indicates that use of this model is currently limited to risk analysis of energetic initiation in the explosives-testing field. The basis for the model is discussed, and its behavior relative to other models developed for the same purpose is investigated. The qualitative behavior of the model is very similar to that of the other models, and for larger values of n (the number of trials) the predicted values of P vary by a factor of about eight among the five models examined. Analysis reveals that the estimator is nearly identical to the median of a Bayesian posterior distribution derived using a uniform prior. An explanation of the application of the estimator in explosives testing is provided, and comments are offered on the use of the estimator versus other possible techniques.
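The Bayesian comparison mentioned above can be made explicit. With zero failures in $n$ independent trials and a uniform prior on $P$, the posterior density is proportional to $(1-P)^n$, i.e., a $\mathrm{Beta}(1,\,n+1)$ distribution. Its median $m$ solves $1-(1-m)^{n+1} = \tfrac12$, giving

\[
m \;=\; 1 - 0.5^{1/(n+1)} \;\approx\; \frac{\ln 2}{n+1} \;\approx\; \frac{0.693}{n+1},
\]

so, for example, 100 failure-free trials support a point estimate of roughly $0.7\%$ for the failure probability.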
18.
"可能"是一个概率模态算子,其核心情态语义特征为[+概率可能]。按照概率模态"概率值的连续标度法","可能"的赋值范围为"0相似文献   
19.
This paper develops a Copula selection method based on the conditional probability integral transform. By comparing the Anderson-Darling (AD), Kolmogorov-Smirnov (KS), and Cramér-von Mises (CM) statistics under the conditional probability integral transform, we examine how well the method fits a variety of Copula functions under different sample sizes and numbers of variables. Using samples of the GSPTSE, INMEX.MX, and NDX stock indices, we systematically compare the conditional-probability-integral-transform approach to Copula selection with selection based on kernel density estimation and on maximum likelihood estimation. The results show that the test based on the conditional probability integral transform effectively solves the problem of selecting among multivariate Copula functions, with a more accurate and more stable goodness-of-fit test; the kernel-density-estimation test is fairly stable for large samples but less so for small samples, while the maximum-likelihood test is unstable.
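As an illustration of the underlying mechanics (not the paper's full multi-Copula comparison), the sketch below applies the conditional probability integral transform (Rosenblatt transform) for a bivariate Gaussian copula to simulated data and scores the result with a hand-rolled Cramér-von Mises statistic; the data, copula family, and parameter estimator are all stand-ins.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(7)

# Simulated joint returns with Gaussian dependence (stand-ins for two indices).
rho_true = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_true], [rho_true, 1.0]], size=500)
u = np.column_stack([rankdata(z[:, 0]) / (len(z) + 1),
                     rankdata(z[:, 1]) / (len(z) + 1)])   # pseudo-observations

# Estimate the Gaussian-copula parameter from the normal scores.
x = norm.ppf(u)
rho_hat = np.corrcoef(x[:, 0], x[:, 1])[0, 1]

# Rosenblatt transform of the second margin given the first: under a correctly
# specified Gaussian copula this conditional transform is Uniform(0, 1).
v2 = norm.cdf((x[:, 1] - rho_hat * x[:, 0]) / np.sqrt(1.0 - rho_hat ** 2))

def cramer_von_mises(sample):
    """Cramér-von Mises distance of a sample from Uniform(0, 1)."""
    s = np.sort(sample)
    m = len(s)
    i = np.arange(1, m + 1)
    return 1.0 / (12 * m) + np.sum(((2 * i - 1) / (2 * m) - s) ** 2)

print(f"fitted rho = {rho_hat:.3f}, "
      f"CM statistic on the transformed data = {cramer_von_mises(v2):.4f}")
```

Repeating this transform-then-test step for each candidate Copula family and each statistic (AD, KS, CM) is broadly how selection among families proceeds.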
20.
Survival analysis and probabilistic inference of stock index rises and falls
By studying the returns over consecutive rising and falling runs of the Shanghai Composite Index, we find that they follow a gamma distribution, from which the conditional probabilities of index rises and falls, as well as the extreme-value cases, can be derived. The influence of trading volume is examined further, which helps explain why technical analysis, and price-volume analysis in particular, is still widely used in practice. The random walk hypothesis can be regarded as only a rough approximation, and this study also helps deepen our understanding of market efficiency theory.
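A minimal sketch of this kind of calculation, on simulated rather than Shanghai Composite data: fit a gamma distribution to the returns accumulated over rising runs and read conditional exceedance probabilities off the fitted survival function.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(42)
# Stand-in for the observed cumulative returns (%) of consecutive rising runs.
run_returns = rng.gamma(shape=1.8, scale=0.9, size=400)

# Maximum-likelihood gamma fit, with the location pinned at zero.
shape, loc, scale = gamma.fit(run_returns, floc=0.0)

# Conditional probability that a run's return exceeds b given that it already
# exceeds a:  P(X > b | X > a) = S(b) / S(a)  for the fitted survival function S.
a, b = 1.0, 3.0
p_cond = (gamma.sf(b, shape, loc=loc, scale=scale)
          / gamma.sf(a, shape, loc=loc, scale=scale))
print(f"fitted shape = {shape:.2f}, scale = {scale:.2f}; "
      f"P(run return > {b}% | run return > {a}%) = {p_cond:.3f}")
```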