11.
Axiomatizations of the normalized Banzhaf value and the Shapley value   (Total citations: 1; self-citations: 1; citations by others: 0)
A cooperative game with transferable utilities – or simply a TU-game – describes a situation in which players can obtain certain payoffs by cooperation. A solution concept for these games is a function that assigns to every such game a distribution of payoffs over the players. Famous solution concepts for TU-games are the Shapley value and the Banzhaf value, both of which have been axiomatized in various ways. An important difference between these two solution concepts is that the Shapley value always distributes exactly the payoff that can be obtained by the 'grand coalition' of all players cooperating together, while the Banzhaf value does not satisfy this property, i.e., the Banzhaf value is not efficient. In this paper we consider the normalized Banzhaf value, which distributes the payoff obtainable by the grand coalition in proportion to the players' Banzhaf values. This value does not satisfy certain axioms underlying the Banzhaf value, and we introduce new axioms that characterize the normalized Banzhaf value instead. We also provide an axiomatization of the Shapley value using similar axioms. Received: 10 April 1996 / Accepted: 2 June 1997
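To make the difference concrete, here is a minimal Python sketch (an illustration, not the paper's axiomatic construction): for a small weighted majority game, the raw Banzhaf values fail to sum to the grand-coalition payoff, while the normalized Banzhaf value and the Shapley value both distribute it fully. The weighted majority game below is a hypothetical example.

```python
from itertools import combinations
from math import factorial

def shapley_value(players, v):
    """Shapley value: weighted average of marginal contributions v(S+i) - v(S)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

def normalized_banzhaf(players, v):
    """Banzhaf values rescaled so that the payoffs sum to v(grand coalition)."""
    n = len(players)
    raw = {i: sum(v(frozenset(S) | {i}) - v(frozenset(S))
                  for r in range(n)
                  for S in combinations([p for p in players if p != i], r)) / 2 ** (n - 1)
           for i in players}
    scale = v(frozenset(players)) / sum(raw.values())
    return {i: scale * b for i, b in raw.items()}

# Hypothetical weighted majority game [4; 3, 2, 1]: a coalition is worth 1
# if its combined weight reaches the quota of 4, and 0 otherwise.
weights = {1: 3, 2: 2, 3: 1}
v = lambda S: 1.0 if sum(weights[p] for p in S) >= 4 else 0.0

print(shapley_value([1, 2, 3], v))       # (2/3, 1/6, 1/6), sums to v(N) = 1
print(normalized_banzhaf([1, 2, 3], v))  # (0.6, 0.2, 0.2) after rescaling
```

Here the raw Banzhaf values are (3/4, 1/4, 1/4), which sum to 5/4 rather than v(N) = 1; the rescaling step is exactly the normalization the abstract describes.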
12.
Studies of population history are often based on incomplete records of life histories. For instance, in studies using data obtained from family reconstitution, the date of death is right censored (by migration) and the censoring time is never observed. Several methods for correcting mortality estimates have been proposed in the literature, most of which first estimate the number of individuals at risk and then use standard techniques to estimate mortality; other methods are based on statistical models. In this paper these methods are reviewed, and their merits are compared by applying them to simulated data and to seventeenth-century data from the English parish of Reigate. An ad hoc method proposed by Ruggles performs reasonably well. Methods based on statistical models, provided they are sufficiently realistic, give comparable accuracy and allow the estimation of several other quantities of interest, such as the distribution of migration times.
13.
Woody M. Liao, Decision Sciences, 1979, 10(1): 116-125
Learning curves have important implications for managerial planning and control. This paper considers the effect of learning on managerial planning models for product-mix problems that can be handled by a linear-programming formulation, and proposes an approach for incorporating learning effects into the planning model. The feasibility and superiority of the proposed approach over the traditional approach are discussed through the use of a linear-programming problem.
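A minimal sketch of how learning effects might enter a product-mix LP (the data and the iterative scheme are illustrative assumptions, not the paper's exact formulation): unit labor hours fall with cumulative volume along a learning curve, so the LP is re-solved with labor coefficients updated at the previously planned volumes until the mix stabilizes.

```python
import numpy as np
from scipy.optimize import linprog

profit = np.array([30.0, 20.0])          # unit contribution margins (hypothetical)
first_unit_hours = np.array([5.0, 3.0])  # labor hours for the first unit of each product
b = np.log(0.80) / np.log(2)             # 80% learning curve exponent (negative)
machine_hours = np.array([2.0, 4.0])     # fixed coefficients, no learning assumed
labor_cap, machine_cap = 4000.0, 6000.0

def avg_unit_hours(a, x, b):
    """Cumulative-average learning model: mean hours per unit over x units."""
    return a if x < 1 else a * x ** b

x = np.array([500.0, 500.0])             # initial volume guess
for _ in range(20):
    labor = np.array([avg_unit_hours(a, xi, b)
                      for a, xi in zip(first_unit_hours, x)])
    res = linprog(-profit, A_ub=[labor, machine_hours],
                  b_ub=[labor_cap, machine_cap], bounds=[(0, None)] * 2)
    if np.allclose(res.x, x, rtol=1e-6):
        break                            # planned mix consistent with learning-adjusted hours
    x = res.x

print("product mix:", x, "profit:", profit @ x)
```

The traditional approach would solve the LP once with first-unit (or constant) labor coefficients; the learning adjustment typically admits a larger feasible volume for the same labor capacity.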
14.
Four classes of Lehmann-type alternatives are considered: G = F^k (k > 1); G = 1 − (1−F)^k (k < 1); G = F^k (k < 1); and G = 1 − (1−F)^k (k > 1), where F and G are two continuous cumulative distribution functions. If an optimal precedence test (one with maximal power) is determined for one of these four classes, the optimal tests for the other classes of alternatives can be derived. An application is given using the results of Lin and Sukhatme (1992), who derived the best precedence test for testing the null hypothesis that the lifetimes of two types of items on test have the same distribution. Their test has maximum power, for fixed k, in the class of alternatives G = 1 − (1−F)^k with k < 1. Best precedence tests for the other three classes of Lehmann-type alternatives are derived using their results. Finally, a comparison of precedence tests with Wilcoxon's two-sample test is presented. Received: February 22, 1999; revised version: June 7, 2000
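A minimal Monte Carlo sketch of a precedence test under the alternative G = 1 − (1−F)^k with k < 1 (the statistic, critical value, and distributions are illustrative assumptions, not Lin and Sukhatme's exact procedure). With F standard exponential, G is exponential with rate k, so under the alternative the Y-lifetimes are longer and small values of the precedence statistic are evidence against the null.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 20, 20, 10   # sample sizes and order-statistic index (hypothetical)
crit = 5               # hypothetical critical value: reject if statistic <= crit

def precedence_stat(x, y, r):
    """Number of y-observations preceding the r-th smallest x-observation."""
    return np.sum(y < np.sort(x)[r - 1])

def rejection_rate(k, reps=20000):
    hits = 0
    for _ in range(reps):
        x = rng.exponential(size=n)                 # F: standard exponential
        y = rng.exponential(scale=1.0 / k, size=m)  # G = 1-(1-F)^k is Exp(rate k)
        hits += precedence_stat(x, y, r) <= crit
    return hits / reps

print("size under H0 (k = 1.0):", rejection_rate(1.0))
print("power under H1 (k = 0.5):", rejection_rate(0.5))
```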
15.
The generalized standard two-sided power (GTSP) distribution was mentioned only in passing by Kotz and van Dorp in Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications (World Scientific Press, Singapore, 2004). In this paper, we investigate this three-parameter distribution further by presenting some novel properties, and we use its more general form to contrast the chronology of developments by various authors on the two-parameter TSP distribution since its initial introduction. GTSP distributions allow for J-shaped forms of the pdf, whereas TSP distributions are limited to U-shaped and unimodal forms; hence, GTSP distributions possess the same three distributional shapes as the classical beta distributions. A novel method and algorithm for the indirect elicitation of the two power parameters of the GTSP distribution is developed. We present a Project Evaluation Review Technique (PERT) example that utilizes this algorithm and demonstrates the benefit of separate powers for the two branches of GTSP activity-duration distributions when estimating project completion time uncertainty.
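A minimal sketch of the GTSP density, assuming the standard two-branch form with threshold theta in (0, 1) and separate powers m (left branch) and n (right branch); the normalizing constant below follows from integrating the two branches to one. The parameter values are illustrative.

```python
import numpy as np

def gtsp_pdf(x, theta, m, n):
    """Two-branch density: (x/theta)^(m-1) left of theta,
    ((1-x)/(1-theta))^(n-1) right of theta, on the unit interval."""
    x = np.asarray(x, dtype=float)
    c = m * n / (n * theta + m * (1 - theta))  # makes the density integrate to 1
    left = c * (x / theta) ** (m - 1)
    right = c * ((1 - x) / (1 - theta)) ** (n - 1)
    return np.where((x >= 0) & (x <= 1), np.where(x <= theta, left, right), 0.0)

# m < 1 < n gives a J-shaped form that the two-parameter TSP (m = n) cannot produce.
xs = np.linspace(0.01, 0.99, 5)
print(gtsp_pdf(xs, theta=0.3, m=0.5, n=3.0))
```

Setting m = n recovers the two-parameter TSP distribution, which is why the GTSP form is convenient for tracing the earlier TSP literature.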
16.
This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with that of the compendial microbiological method, using a specific nonserial dilution experiment. The finite-sample distributions of these test statistics are unknown, because they are functions of correlated count data. A simulation study is conducted to investigate the type I and type II error rates. For a balanced experimental design, the likelihood ratio test and the main-effects analysis of variance (ANOVA) test for microbiological methods attained nominal type I error rates and provided the highest power compared with a test on weighted averages and two other ANOVA tests. The likelihood ratio test is preferred because it can also be used for unbalanced designs. It is demonstrated that an increase in power can be achieved only by increasing the spiked number of organisms used in the experiment; surprisingly, the power is not affected by the number of dilutions or the number of test samples. A real case study is provided to illustrate the theory. Copyright © 2013 John Wiley & Sons, Ltd.
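A minimal sketch of the kind of comparison described (the Poisson model, spike level, and sample sizes are illustrative assumptions, not the paper's design): a likelihood ratio test for equal recovery rates of the two methods, with a small simulation of its type I error rate.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)

def lrt_pvalue(x, y):
    """Likelihood ratio test for H0: equal Poisson means of count samples x and y."""
    lam0 = np.mean(np.concatenate([x, y]))
    lam_x, lam_y = np.mean(x), np.mean(y)
    # log-likelihood ratio; the factorial terms cancel between numerator and denominator
    llr = (np.sum(x * np.log(lam_x / lam0)) + np.sum(y * np.log(lam_y / lam0))
           + len(x) * (lam0 - lam_x) + len(y) * (lam0 - lam_y))
    return chi2.sf(2 * llr, df=1)

# Type I error under H0, with a hypothetical spike level of 50 organisms
reps, hits = 5000, 0
for _ in range(reps):
    x = rng.poisson(50, size=10)   # rapid method counts
    y = rng.poisson(50, size=10)   # compendial method counts
    hits += lrt_pvalue(x, y) < 0.05
print("estimated type I error rate:", hits / reps)
```

Raising the spike level of 50 sharpens the test (the Poisson means grow while the relative noise shrinks), which is consistent with the abstract's finding that only the spiked number of organisms drives power.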
17.
We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence of n^(-1/3) at a fixed point t if the density has a negative derivative at t. The same rate is obtained by a kernel estimator with bandwidth of order n^(-1/3), but the limit distributions are different. If the density is both differentiable at t and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence n^(-1/3), and compare the limit distributions of the three types of estimators. It is shown that both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behaviour of a kernel estimator with a bandwidth larger than n^(-1/3), in the case that the density is known to have more than one derivative.
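A minimal sketch of the third estimator mentioned above: a kernel density estimate on a grid with bandwidth of order n^(-1/3), projected onto decreasing functions by the pool-adjacent-violators algorithm (the kernel, grid, and sample are illustrative choices).

```python
import numpy as np

def pava_decreasing(y):
    """Least-squares projection of a sequence onto non-increasing sequences."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v)); wts.append(1)
        # pool adjacent blocks while they violate monotonicity
        while len(vals) > 1 and vals[-2] < vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            wts[-2] = w
            vals.pop(); wts.pop()
    return np.repeat(vals, wts)

rng = np.random.default_rng(0)
x = rng.exponential(size=500)        # sample from a decreasing density on [0, inf)
n, h = len(x), len(x) ** (-1 / 3)    # bandwidth of order n^(-1/3)
grid = np.linspace(0.05, 3.0, 60)

# Gaussian kernel density estimate at the grid points
kde = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1) \
      / (n * h * np.sqrt(2 * np.pi))
iso = pava_decreasing(kde)           # isotonized (decreasing) version

print("kernel estimate already monotone:", bool(np.all(np.diff(kde) <= 0)))
print("max change from isotonization:", float(np.max(np.abs(iso - kde))))
```

With a larger bandwidth the kernel estimate tends to be monotone already, so the projection changes nothing, illustrating the last claim of the abstract.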
18.
Probability plots are often used to estimate the parameters of distributions. Using large-sample properties of the empirical distribution function and order statistics, weights that stabilize the variance for weighted least squares regression are derived. Weighted least squares regression is then applied to the estimation of the parameters of the Weibull and the Gumbel distributions. The weights are independent of the parameters of the distributions considered. Monte Carlo simulation shows that the weighted least-squares estimators consistently outperform the usual least-squares estimators, especially in small samples.
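A minimal sketch of a Weibull probability-plot fit by weighted least squares; the median-rank plotting positions and the delta-method weights below are common illustrative choices, not necessarily the exact weights derived in the paper. Note the weights depend only on the plotting positions, not on the Weibull parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
shape_true, scale_true = 2.0, 10.0
x = np.sort(scale_true * rng.weibull(shape_true, size=30))
n = len(x)

p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median-rank plotting positions
u = np.log(x)                                # Weibull plot: ln(-ln(1-F)) vs ln(x)
z = np.log(-np.log(1 - p))                   # is linear with slope = shape

# Delta method: Var[z_i] ~ p / (n (1-p) ln(1-p)^2); weight = inverse variance
w = n * (1 - p) * np.log(1 - p) ** 2 / p

# Weighted least squares for z = beta*u - beta*ln(eta)
W = np.diag(w)
A = np.column_stack([u, np.ones(n)])
slope, intercept = np.linalg.solve(A.T @ W @ A, A.T @ W @ z)
print("shape estimate:", slope, "scale estimate:", np.exp(-intercept / slope))
```

The extreme plotting positions have the largest variance on the ln(-ln) scale, so the weights downweight the tails, which is where ordinary least squares loses most of its efficiency in small samples.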
19.
We introduce a class of random fields that can be understood as discrete versions of multicolour polygonal fields built on regular linear tessellations. We focus first on a subclass of consistent polygonal fields, for which we show Markovianity and solvability by means of a dynamic representation. This representation is used to design new sampling techniques for Gibbsian modifications of such fields, a class which covers lattice-based random fields. A flux-based modification is applied to the extraction of the network of field tracks from a Synthetic Aperture Radar image of a rural area.
20.
The Fisher exact test has been unjustly dismissed by some as 'only conditional,' whereas it is unconditionally the uniformly most powerful test among all unbiased tests, i.e., tests of size α whose power never falls below the nominal level of significance α. The problem with this truly optimal test is that it requires randomization at the critical value(s) in order to be of size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, which reduces the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) that is conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the modified unconditional test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test.

Comparisons of size, power, and p values with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
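A minimal sketch of the conditional quantities involved (the data are hypothetical): for two binomial samples, conditioning on the total number of successes T gives a hypergeometric null distribution, from which both the conservative Fisher exact p value and the less conservative mid-p value follow.

```python
from scipy.stats import hypergeom

def fisher_one_sided(a, n1, n2, t):
    """One-sided conditional p values for a successes out of n1 in group 1,
    given t total successes out of n1 + n2; tests H1: group 1 rate is larger."""
    rv = hypergeom(n1 + n2, t, n1)       # population, success states, draws
    p_exact = rv.sf(a - 1)               # P(X >= a): conservative Fisher p value
    p_mid = rv.sf(a) + 0.5 * rv.pmf(a)   # mid-p: half weight on the observed table
    return p_exact, p_mid

# Hypothetical data: 9/12 successes in group 1 versus 4/13 in group 2
a, n1, n2, t = 9, 12, 13, 9 + 4
p_exact, p_mid = fisher_one_sided(a, n1, n2, t)
print("Fisher exact p:", p_exact, " mid-p:", p_mid)
```

The mid-p value replaces the full probability of the observed table by half of it, which shrinks the gap between the actual and nominal size without the explicit randomization of the UMPU test.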