Search results: 2,021 records (search time: 944 ms)
21.
Standard algorithms for the construction of iterated bootstrap confidence intervals are computationally very demanding, requiring nested levels of bootstrap resampling. We propose an alternative approach to constructing double bootstrap confidence intervals that involves replacing the inner level of resampling by an analytical approximation. This approximation is based on saddlepoint methods and a tail probability approximation of DiCiccio and Martin (1991). Our technique significantly reduces the computational expense of iterated bootstrap calculations. A formal algorithm for the construction of our approximate iterated bootstrap confidence intervals is presented, and some crucial practical issues arising in its implementation are discussed. Our procedure is illustrated in the case of constructing confidence intervals for ratios of means using both real and simulated data. We repeat an experiment of Schenker (1985) involving the construction of bootstrap confidence intervals for a variance and demonstrate that our technique makes feasible the construction of accurate bootstrap confidence intervals in that context. Finally, we investigate the use of our technique in a more complex setting, that of constructing confidence intervals for a correlation coefficient.
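The cost the abstract targets is easy to see from the building block the double bootstrap nests. The sketch below is an ordinary single-level percentile bootstrap for a ratio of means (the paper's illustrative statistic); the double bootstrap would rerun this entire procedure inside each of the `n_boot` resamples, squaring the number of statistic evaluations, which is what the saddlepoint approximation avoids. The data values here are made up for illustration.

```python
import random
from statistics import fmean

def percentile_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Single-level percentile bootstrap CI for stat(data).

    The double bootstrap nests this whole loop inside each resample
    (roughly n_boot**2 statistic evaluations), the expense the
    saddlepoint-based inner-level approximation removes.
    """
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Ratio of means on paired (x, y) observations -- hypothetical data.
pairs = list(zip([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2],
                 [2.0, 2.4, 1.9, 2.8, 2.2, 2.6, 2.1, 2.5]))
ratio = lambda d: fmean(a for a, _ in d) / fmean(b for _, b in d)
lo, hi = percentile_ci(pairs, ratio)
```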
22.
Robert F. Nau. Journal of Risk and Uncertainty, 1995, 10(1): 71-91
This article explores the extent to which a decision maker's probabilities can be measured separately from his/her utilities by observing his/her acceptance of small monetary gambles. Only a partial separation is achieved: the acceptable gambles are partitioned into a set of belief gambles, which reveals probabilities distorted by marginal utilities for money, and a set of preference gambles, which reveals utilities reciprocally distorted by marginal utilities for money. However, the information in these gambles still enables us to solve the decision maker's problem: his/her utility-maximizing decision is the one that avoids arbitrage (i.e., incoherence or Dutch books).
23.
We study, from the standpoint of coherence, comparative probabilities on an arbitrary family E of conditional events. Given a binary relation ·, coherence conditions on · are related to de Finetti's coherent betting system: we consider their connections to the usual properties of comparative probability and to the possibility of numerical representations of ·. In this context, the numerical reference frame is that of de Finetti's coherent subjective conditional probability, which is not introduced (as in Kolmogorov's approach) through a ratio between probability measures. Another relevant feature of our approach is that the family E need not have any particular algebraic structure, so that the ordering can be initially given for a few conditional events of interest and then possibly extended by a step-by-step procedure, preserving coherence.
24.
Louis Marinoff. Theory and Decision, 1993, 35(1): 55-73
In quantum domains, the measurement (or observation) of one of a pair of complementary variables introduces an unavoidable uncertainty in the value of that variable's complement. Such uncertainties are negligible in Newtonian worlds, where observations can be made without appreciably disturbing the observed system. Hence, one would not expect that an observation of a non-quantum probabilistic outcome could affect a probability distribution over subsequently possible states, in a way that would conflict with classical probability calculations. This paper examines three problems in which observations appear to affect the probabilities and expected utilities of subsequent outcomes, in ways which may appear paradoxical. Deeper analysis of these problems reveals that the anomalies arise, not from paradox, but rather from faulty inferences drawn from the observations themselves. Thus the notion of quantum decision theory is disparaged.
25.
On the Effect of Probability Distributions of Input Variables in Public Health Risk Assessment
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1-3) as a probabilistic tool to assess the effect of probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al. (4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold risk is at the 50th percentile of the original distribution given by Thompson et al. (4). The impact was much greater when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed a reversed trend, where the impact was more appreciable for the 50th percentile of the original distribution of risk given by Thompson et al. (4) than for the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
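FORM itself is an analytic approximation, but the exceedance question the abstract studies can be illustrated with brute-force Monte Carlo: fix a hypothetical input with the same mean and standard deviation, swap its distribution among normal, lognormal, and uniform, and compare the probability of exceeding a tail threshold. All the numbers below are invented for illustration; only the qualitative point (tail probabilities diverge as the threshold moves into the tail) comes from the abstract.

```python
import math
import random

def prob_exceed(sampler, threshold, n=40000, seed=1):
    """Monte Carlo estimate of P(input > threshold) -- a brute-force
    stand-in for the FORM probability-of-failure calculation."""
    rng = random.Random(seed)
    return sum(sampler(rng) > threshold for _ in range(n)) / n

mu, sigma = 1.0, 0.5                 # hypothetical mean/sd of the input
# Lognormal parameters matched to the same mean and sd.
s2 = math.log(1 + (sigma / mu) ** 2)
m = math.log(mu) - s2 / 2
half = sigma * math.sqrt(3)          # uniform(mu-half, mu+half) has sd sigma

t = 1.8                              # a threshold out in the right tail
p_norm = prob_exceed(lambda r: r.gauss(mu, sigma), t)
p_logn = prob_exceed(lambda r: r.lognormvariate(m, math.sqrt(s2)), t)
p_unif = prob_exceed(lambda r: r.uniform(mu - half, mu + half), t)
```

With matched first two moments, the three distributions nonetheless give clearly different tail exceedance probabilities (uniform lowest, lognormal highest here), echoing the paper's finding that distribution choice matters most at high percentiles.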
26.
Estimation from Zero-Failure Data
Robert T. Bailey. Risk Analysis, 1997, 17(3): 375-380
When performing quantitative (or probabilistic) risk assessments, it is often the case that data for many of the potential events in question are sparse or nonexistent. Some of these events may be well represented by the binomial probability distribution. In this paper, a model for predicting the binomial failure probability, P, from data that include no failures is examined. A review of the literature indicates that the use of this model is currently limited to risk analysis of energetic initiation in the explosives testing field. The basis for the model is discussed, and the behavior of the model relative to other models developed for the same purpose is investigated. It is found that the qualitative behavior of the model is very similar to that of the other models, and for larger values of n (the number of trials), the predicted P values varied by a factor of about eight among the five models examined. Analysis revealed that the estimator is nearly identical to the median of a Bayesian posterior distribution, derived using a uniform prior. An explanation of the application of the estimator in explosives testing is provided, and comments are offered regarding the use of the estimator versus other possible techniques.
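The Bayesian benchmark mentioned in the abstract has a closed form: with a uniform prior and zero failures in n trials, the posterior for the failure probability is Beta(1, n+1), whose median can be written down directly. This sketch shows that benchmark only, not the paper's own estimator.

```python
def zero_failure_p(n: int) -> float:
    """Median of the Beta(1, n+1) posterior for the binomial failure
    probability after n trials with zero failures, under a uniform prior.

    Posterior density is proportional to (1 - p)**n, so the CDF is
    1 - (1 - p)**(n + 1); setting it to 0.5 gives the median below.
    """
    return 1.0 - 0.5 ** (1.0 / (n + 1))

est = zero_failure_p(10)   # a small point estimate, shrinking as n grows
```

As expected, the estimate starts at 0.5 with no data at all and decreases toward zero as failure-free trials accumulate.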
27.
28.
Journal of Statistical Computation and Simulation, 2012, 82(4): 802-823
The exponential–Poisson (EP) distribution with scale and shape parameters β>0 and λ∈ℝ, respectively, is a lifetime distribution obtained by mixing exponential and zero-truncated Poisson models. The EP distribution has been a good alternative to the gamma distribution for modelling lifetime, reliability and time intervals of successive natural disasters. The EP and gamma distributions share several properties: for example, their densities may be strictly decreasing or unimodal, and their hazard rate functions may be decreasing, increasing or constant depending on their shape parameters. On the other hand, the EP distribution has several interesting applications based on stochastic representations involving the maximum and minimum of iid exponential variables (with random sample size), which make it of distinguishable scientific importance from the gamma distribution. Given the similarities and different scientific relevance of these models, one question of interest is how to discriminate them. With this in mind, we propose a likelihood ratio test based on Cox's statistic to discriminate the EP and gamma distributions. The asymptotic distribution of the normalized logarithm of the ratio of the maximized likelihoods under the two null hypotheses – data come from EP or gamma distributions – is provided. With this, we obtain the probabilities of correct selection. Hence, we propose to choose the model that maximizes the probability of correct selection (PCS). We also determine the minimum sample size required to discriminate the EP and gamma distributions when the PCS and a tolerance level based on some distance are specified in advance. A simulation study to evaluate the accuracy of the asymptotic probabilities of correct selection is also presented. The paper is motivated by two applications to real data sets.
29.
Journal of Statistical Computation and Simulation, 2012, 82(9): 829-841
For a normal distribution with known variance, the standard confidence interval for the location parameter is derived from the classical Neyman procedure. When the parameter space is known to be restricted, the standard confidence interval is arguably unsatisfactory. Recent articles have addressed this problem and proposed confidence intervals for the mean of a normal distribution when the mean is restricted to be nonnegative. In this article, we propose a new confidence interval, the rp interval, and derive the Bayesian credible interval and likelihood ratio interval for a general restricted parameter space. We compare these intervals with the standard interval and the minimax interval. Simulation studies are undertaken to assess the performances of these confidence intervals.
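The baseline the abstract starts from is easy to state concretely: the classical Neyman interval for a normal mean with known variance, clipped to the restricted space. The sketch below shows only that naive truncated interval (the one argued to be unsatisfactory), not the proposed rp interval.

```python
import math
from statistics import NormalDist

def truncated_standard_ci(xbar, sigma, n, alpha=0.05, lower=0.0):
    """Classical Neyman interval for a normal mean (known sigma),
    clipped to the restricted parameter space [lower, inf).

    This is the unsatisfactory baseline from the abstract: when xbar is
    far enough below `lower`, the interval degenerates to a point.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * sigma / math.sqrt(n)
    return max(lower, xbar - half), max(lower, xbar + half)

ok = truncated_standard_ci(0.3, 1.0, 25)    # lower end clipped to 0
bad = truncated_standard_ci(-0.5, 1.0, 25)  # degenerates to (0.0, 0.0)
```

The degenerate `(0.0, 0.0)` case for a strongly negative sample mean is exactly the pathology that motivates restricted-parameter intervals such as the rp, credible, and likelihood ratio intervals compared in the article.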
30.
Statistical process monitoring (SPM) is an efficient tool for maintaining and improving the quality of a product. In many industrial processes, the end product has two or more attribute-type quality characteristics. Some of these characteristics are mutually independent, but successive observations are Markov dependent. It is essential to develop a control chart for such situations. In this article, we develop an Independent Attributes Control Chart for Markov Dependent Processes based on an error-probabilities criterion, under the assumption of one-step Markov dependency. Implementation of the chart is similar to that of a Shewhart-type chart. Performance of the chart has been studied using the probability of detecting a shift. A procedure to identify the attribute(s) responsible for the out-of-control status of the process is given.
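The one-step Markov dependency the chart assumes can be made concrete with a small simulation: a 0/1 nonconformity stream whose probability of a nonconforming item depends only on whether the previous item was nonconforming. The transition probabilities below are hypothetical; this sketches the dependence structure only, not the chart's control limits.

```python
import random

def simulate_attribute(p01, p11, n, seed=0):
    """0/1 nonconformity stream with one-step Markov dependence:
    P(X_t = 1 | X_{t-1} = 0) = p01,  P(X_t = 1 | X_{t-1} = 1) = p11."""
    rng = random.Random(seed)
    x, out = 0, []
    for _ in range(n):
        p = p11 if x else p01
        x = 1 if rng.random() < p else 0
        out.append(x)
    return out

def stationary_rate(p01, p11):
    """Long-run nonconforming fraction of the chain above."""
    return p01 / (p01 + 1 - p11)

stream = simulate_attribute(0.1, 0.3, 100_000)
emp = sum(stream) / len(stream)  # close to stationary_rate(0.1, 0.3) = 0.125
```

Because p11 > p01 here, nonconformities cluster; a chart that assumes independent Bernoulli trials would misstate its error probabilities on such a stream, which is the situation the article's chart is designed for.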