Sorted results: 3,094 records found (search time: 880 ms)
41.
We study, from the standpoint of coherence, comparative probabilities on an arbitrary family ℰ of conditional events. Given a binary relation ⪯, coherence conditions on ⪯ are related to de Finetti's coherent betting system: we consider their connections to the usual properties of comparative probability and to the possibility of numerical representations of ⪯. In this context, the numerical reference frame is that of de Finetti's coherent subjective conditional probability, which is not introduced (as in Kolmogoroff's approach) through a ratio between probability measures. Another relevant feature of our approach is that the family ℰ need not have any particular algebraic structure, so the ordering can initially be given for a few conditional events of interest and then extended by a step-by-step procedure, preserving coherence.
42.
Louis Marinoff. Theory and Decision, 1993, 35(1): 55-73
In quantum domains, measuring (or observing) one of a pair of complementary variables introduces an unavoidable uncertainty in the value of that variable's complement. Such uncertainties are negligible in Newtonian worlds, where observations can be made without appreciably disturbing the observed system. Hence, one would not expect an observation of a non-quantum probabilistic outcome to affect a probability distribution over subsequently possible states in a way that conflicts with classical probability calculations. This paper examines three problems in which observations appear to affect the probabilities and expected utilities of subsequent outcomes in ways that may appear paradoxical. Deeper analysis reveals that the anomalies arise not from paradox but from faulty inferences drawn from the observations themselves. The notion of quantum decision theory is thereby disparaged.
43.
On the Effect of Probability Distributions of Input Variables in Public Health Risk Assessment
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1-3) as a probabilistic tool to assess the effect of the probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold is at the 50th percentile of the original distribution given by Thompson et al.,(4) and a much greater impact when the threshold is at the 95th percentile. The impact on uncertainty sensitivity showed the reverse trend: it was more appreciable at the 50th percentile of the original risk distribution than at the 95th. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
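The sensitivity of a threshold-exceedance probability to the choice of input distribution can be illustrated with a toy Monte Carlo sketch. This is a hypothetical multiplicative risk model with roughly matched moments, not Thompson et al.'s actual benzene model; the variable names and parameter values are assumptions for illustration only.

```python
# Toy Monte Carlo sketch (NOT Thompson et al.'s benzene model): estimate the
# probability that risk exceeds a fixed threshold under alternative choices
# of input distribution with approximately matched mean and variance.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def risk_samples(dist):
    # Hypothetical multiplicative risk model: risk = C * IR / BW.
    if dist == "lognormal":
        c = rng.lognormal(mean=0.0, sigma=0.5, size=n)
    elif dist == "normal":  # same approximate mean/sd, truncated at 0
        c = np.clip(rng.normal(loc=1.13, scale=0.60, size=n), 1e-9, None)
    else:  # uniform with the same mean
        c = rng.uniform(0.1, 2.16, size=n)
    ir = rng.lognormal(0.0, 0.3, size=n)   # intake rate (shape held fixed)
    bw = rng.normal(70.0, 10.0, size=n)    # body weight
    return c * ir / np.abs(bw)

# Threshold at the 50th percentile of the lognormal-based risk distribution:
threshold = np.median(risk_samples("lognormal"))
for dist in ("lognormal", "normal", "uniform"):
    p_fail = np.mean(risk_samples(dist) > threshold)
    print(f"{dist:9s} P(risk > threshold) = {p_fail:.3f}")
```

Repeating the comparison with the threshold at the 95th percentile shows the larger discrepancies the abstract describes, since the distribution tails differ far more than their central regions.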
44.
Estimation from Zero-Failure Data
Robert T. Bailey. Risk Analysis, 1997, 17(3): 375-380
When performing quantitative (or probabilistic) risk assessments, data for many of the potential events in question are often sparse or nonexistent. Some of these events may be well represented by the binomial probability distribution. In this paper, a model for predicting the binomial failure probability P from data that include no failures is examined. A review of the literature indicates that use of this model is currently limited to risk analysis of energetic initiation in the explosives-testing field. The basis for the model is discussed, and its behavior relative to other models developed for the same purpose is investigated. The qualitative behavior of the model is very similar to that of the other models, and for larger values of n (the number of trials) the predicted P values varied by a factor of about eight among the five models examined. Analysis revealed that the estimator is nearly identical to the median of a Bayesian posterior distribution derived using a uniform prior. An explanation of the application of the estimator in explosives testing is provided, and comments are offered on the use of the estimator versus other possible techniques.
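The posterior-median connection mentioned above has a closed form worth spelling out: with a uniform Beta(1, 1) prior and zero failures in n trials, the posterior for P is Beta(1, n + 1), whose median solves 1 - (1 - p)^(n + 1) = 0.5. The sketch below computes it alongside the classical "rule of three" upper bound for comparison; note the paper's estimator is only said to be *nearly* identical to this median, so treat this as an illustration, not the paper's exact formula.

```python
# Zero-failure binomial estimation: posterior median under a uniform prior.
# Beta(1, n + 1) posterior => median p with 1 - (1 - p)**(n + 1) = 0.5.

def posterior_median_zero_failures(n: int) -> float:
    """Median of the Beta(1, n + 1) posterior after n failure-free trials."""
    return 1.0 - 0.5 ** (1.0 / (n + 1))

def rule_of_three(n: int) -> float:
    """Classical approximate 95% upper bound on p after n failure-free trials."""
    return 3.0 / n

for n in (10, 50, 200):
    print(n, posterior_median_zero_failures(n), rule_of_three(n))
```

Both quantities shrink roughly like 1/n, which matches the qualitative agreement among models that the abstract reports for large n.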
45.
A Bayesian Credibility Model for Experience Ratemaking Based on MCMC Steady-State Simulation
The Bühlmann-Straub model is one of the best-known applications of the Bayesian method to experience ratemaking. However, the traditional Bühlmann-Straub model cannot produce unbiased posterior estimates of the parameters when prior information on the structural parameters is insufficient; moreover, the difficulty of high-dimensional numerical computation limits the application of the Bayesian method. After analysing the structure of the Bühlmann-Straub model, this paper introduces a Markov chain Monte Carlo (MCMC) simulation method based on Gibbs sampling and builds a Bayesian credibility model for estimating the predictive risk premium. Numerical results show that the model yields the posterior distributions of the parameters dynamically, provides posterior estimates of the parameters even when unknown parameters are present, and improves numerical precision, which helps to identify heterogeneity in the premiums.
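A minimal Gibbs-sampling sketch of a hierarchical credibility premium is given below. It uses a simple normal-normal stand-in (known variance components, flat prior on the overall mean), not the full Bühlmann-Straub specification or the paper's model; all data and parameter values are hypothetical.

```python
# Gibbs sampler for a toy hierarchical credibility model (a normal-normal
# stand-in, NOT the full Buhlmann-Straub model):
#   X_ij ~ N(theta_i, s2),  theta_i ~ N(mu, t2),  flat prior on mu.
import numpy as np

rng = np.random.default_rng(1)
claims = [rng.normal(loc, 1.0, size=8) for loc in (2.0, 3.0, 5.0)]
s2, t2 = 1.0, 4.0                       # assumed-known variance components
k = len(claims)

mu = 0.0
theta = np.zeros(k)
draws = []
for it in range(4000):
    for i, x in enumerate(claims):
        prec = len(x) / s2 + 1.0 / t2   # conditional precision of theta_i
        m = (x.sum() / s2 + mu / t2) / prec
        theta[i] = rng.normal(m, prec ** -0.5)
    mu = rng.normal(theta.mean(), (t2 / k) ** 0.5)
    if it >= 1000:                      # discard burn-in
        draws.append(theta.copy())

premium = np.mean(draws, axis=0)        # posterior-mean risk premium per group
print(premium)
```

The conditional mean m is exactly a credibility-weighted average of the group's own experience and the collective mean mu, which is the sense in which MCMC output recovers credibility premiums.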
46.
47.
Kung-Jong Lui. Communications in Statistics: Simulation and Computation, 2016, 45(7): 2562-2576
We develop four asymptotic interval estimators and one exact interval estimator for the odds ratio (OR) under stratified random sampling with matched pairs. We apply Monte Carlo simulation to evaluate the performance of these five interval estimators. We note that the conditional score-test-based interval estimator with a monotonic transformation and the interval estimator based on the Mantel-Haenszel (MH) type point estimator with the logarithmic transformation are generally preferable to the others considered here. We also note that the conditional exact confidence interval can be useful when the total number of matched pairs with discordant responses is small.
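For matched pairs, only discordant pairs carry information about the OR, and an MH-type pooled estimate with a log-scale Wald interval can be sketched as below. This is a simplified illustration with hypothetical counts; the paper's exact variance estimator and transformations may differ.

```python
# Hedged sketch: Mantel-Haenszel-type odds-ratio estimate for matched pairs
# under stratified sampling, with a Wald CI on the log scale.
import math

# (b_k, c_k) = discordant-pair counts per stratum, i.e. pairs where only the
# case (b) or only the control (c) is exposed -- hypothetical data.
strata = [(12, 5), (8, 3), (15, 9)]

b_tot = sum(b for b, _ in strata)
c_tot = sum(c for _, c in strata)
or_mh = b_tot / c_tot                            # MH-type pooled estimate

se_log = math.sqrt(1.0 / b_tot + 1.0 / c_tot)    # simplified log-scale SE
z = 1.959964                                     # 97.5% normal quantile
lo = or_mh * math.exp(-z * se_log)
hi = or_mh * math.exp(z * se_log)
print(f"OR = {or_mh:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Working on the log scale keeps the interval inside (0, ∞) and makes the normal approximation more accurate, which is why the log-transformed MH interval performs well in the simulations the abstract summarizes.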
48.
Journal of Statistical Computation and Simulation, 2012, 82(4): 802-823
The exponential–Poisson (EP) distribution with scale parameter β>0 and shape parameter λ∈ℝ is a lifetime distribution obtained by mixing exponential and zero-truncated Poisson models. The EP distribution has been a good alternative to the gamma distribution for modelling lifetimes, reliability, and time intervals between successive natural disasters. The EP and gamma distributions share several properties: their densities may be strictly decreasing or unimodal, and their hazard rate functions may be decreasing, increasing, or constant, depending on their shape parameters. On the other hand, the EP distribution has several interesting applications based on stochastic representations involving the maximum and minimum of iid exponential variables (with random sample size), which give it scientific relevance distinguishable from that of the gamma distribution. Given the similarities and the distinct scientific relevance of these models, a question of interest is how to discriminate between them. With this in mind, we propose a likelihood ratio test based on Cox's statistic to discriminate the EP and gamma distributions. The asymptotic distribution of the normalized logarithm of the ratio of the maximized likelihoods under the two null hypotheses (data come from the EP or the gamma distribution) is provided. From this, we obtain the probabilities of correct selection, and we propose to choose the model that maximizes the probability of correct selection (PCS). We also determine the minimum sample size required to discriminate the EP and gamma distributions when the PCS and a tolerance level based on some distance are specified in advance. A simulation study to evaluate the accuracy of the asymptotic probabilities of correct selection is also presented. The paper is motivated by two applications to real data sets.
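The core of the ratio-of-maximized-likelihoods statistic can be sketched directly: fit both models by maximum likelihood and compare the maximized log-likelihoods. The EP density used here is the Kuş (2007) form with λ>0; the sign-based selection rule below is an illustration, whereas the paper normalizes the statistic and uses Cox's asymptotic distribution to obtain probabilities of correct selection.

```python
# Sketch: discriminate EP vs gamma via the log of the ratio of maximized
# likelihoods, T = l_EP - l_gamma (illustrative selection rule only).
import numpy as np
from scipy import optimize, stats

def ep_logpdf(x, lam, beta):
    # Kus (2007) EP density:
    #   f(x) = lam*beta/(1 - exp(-lam)) * exp(-lam - beta*x + lam*exp(-beta*x))
    return (np.log(lam * beta) - np.log1p(-np.exp(-lam))
            - lam - beta * x + lam * np.exp(-beta * x))

def fit_ep(x):
    # Maximize the EP log-likelihood over log-parameters (keeps lam, beta > 0).
    nll = lambda p: -np.sum(ep_logpdf(x, np.exp(p[0]), np.exp(p[1])))
    res = optimize.minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return -res.fun

rng = np.random.default_rng(2)
data = rng.gamma(shape=2.0, scale=1.0, size=300)   # truth: gamma

ll_ep = fit_ep(data)
a, _, scale = stats.gamma.fit(data, floc=0)        # gamma MLE (location fixed)
ll_gamma = np.sum(stats.gamma.logpdf(data, a, scale=scale))

T = ll_ep - ll_gamma        # log of the ratio of maximized likelihoods
print("choose", "gamma" if T < 0 else "EP")
```

With gamma-generated data the statistic is negative with high probability at this sample size, which is exactly the event whose probability the PCS quantifies.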
49.
Journal of Statistical Computation and Simulation, 2012, 82(9): 829-841
For a normal distribution with known variance, the standard confidence interval for the location parameter is derived from the classical Neyman procedure. When the parameter space is known to be restricted, the standard confidence interval is arguably unsatisfactory. Recent articles have addressed this problem and proposed confidence intervals for the mean of a normal distribution whose parameter space is restricted to be non-negative. In this article, we propose a new confidence interval, the rp interval, and derive the Bayesian credible interval and the likelihood ratio interval for a general restricted parameter space. We compare these intervals with the standard interval and the minimax interval. Simulation studies are undertaken to assess the performance of these confidence intervals.
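The problem is easy to see concretely: the standard z-interval can extend below zero even when the mean is known to be non-negative. The sketch below shows the standard interval and its naive truncation to the restricted space; the paper's rp, Bayesian credible, and likelihood ratio intervals are more refined constructions than this truncation.

```python
# Standard z-interval for a normal mean with known variance, and its naive
# truncation to a restricted parameter space theta >= 0 (illustration only).
import math

def z_interval(xbar, sigma, n, z=1.959964):
    """Classical Neyman 95% interval: xbar +/- z * sigma / sqrt(n)."""
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

def restricted_interval(xbar, sigma, n, lower=0.0):
    """Naive fix: clip the standard interval to the restricted space."""
    lo, hi = z_interval(xbar, sigma, n)
    return max(lo, lower), max(hi, lower)

# An observed mean below the boundary of the restricted space:
print(z_interval(-0.2, 1.0, 25))          # dips below 0
print(restricted_interval(-0.2, 1.0, 25)) # clipped at 0
```

When the observed mean is sufficiently negative the truncated interval degenerates toward a single point, which is one reason simple truncation is considered unsatisfactory and better restricted-space intervals are sought.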
50.
Journal of Statistical Computation and Simulation, 2012, 82(8): 903-914
This paper considers the design of accelerated life test (ALT) sampling plans under Type I progressive interval censoring with random removals. We assume that product lifetimes follow a Weibull distribution. Two constant stress levels higher than the use condition are employed. The sample size and the acceptability constant that satisfy given levels of producer's risk and consumer's risk are found. In particular, the optimal stress level and the allocation proportion are obtained by minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters. Furthermore, for validation purposes, a Monte Carlo simulation is conducted to assess the true probability of acceptance for the derived sampling plans.
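The validation-by-simulation step can be sketched with a much simpler plan than the paper's: Weibull lifetimes, ordinary Type I censoring at a single time, and a failure-count acceptance rule. The plan parameters, stress-free setting, and accept-if-few-failures rule here are illustrative assumptions, not the paper's progressive-interval, MLE-based design.

```python
# Simplified Monte Carlo check of a life-test sampling plan's probability of
# acceptance: Weibull lifetimes, Type I censoring at time tau, accept the lot
# if at most c failures are observed. (Toy stand-in for the paper's plans.)
import numpy as np

rng = np.random.default_rng(3)

def prob_acceptance(shape, scale, n, tau, c, reps=20_000):
    t = rng.weibull(shape, size=(reps, n)) * scale
    failures = (t <= tau).sum(axis=1)    # failures observed before censoring
    return np.mean(failures <= c)

# Hypothetical plan: n = 30 units on test, censor at tau = 1.0, accept if <= 5 fail.
good = prob_acceptance(shape=1.5, scale=3.0, n=30, tau=1.0, c=5)   # reliable lot
bad = prob_acceptance(shape=1.5, scale=1.2, n=30, tau=1.0, c=5)    # poor lot
print(f"P(accept | good lot) = {good:.3f}, P(accept | bad lot) = {bad:.3f}")
```

Producer's and consumer's risks correspond to 1 minus the first probability and to the second probability, respectively; a real design searches n and c until both meet their specified levels.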