21.
Thorsten van der Velten (Marketing Manager); Dr H. Igor Ansoff (Distinguished Professor) 《Long Range Planning》1998,31(6):879-885
This study analyzes the behavior of top managers in managing their firms' business portfolios. A review of the existing literature provides a conceptual framework for a research design combining quantitative and qualitative techniques. Data were collected through interviews with 35 top managers of well-known German business firms, and the findings were translated into guidelines for managers. The results suggest that successful top managers should be concerned with the social relationships and management processes within the corporate office, as well as with their use of management concepts in managing the business portfolio.
22.
G = F^k (k > 1); G = 1 − (1 − F)^k (k < 1); G = F^k (k < 1); and G = 1 − (1 − F)^k (k > 1), where F and G are two continuous cumulative distribution functions. If an optimal precedence test (one with the maximal power) is determined for one of these four classes, the optimal tests for the other classes of alternatives can be derived. An application is given using the results of Lin and Sukhatme (1992), who derived the best precedence test for testing the null hypothesis that the lifetimes of two types of items on test have the same distribution. Their test has maximum power for fixed k in the class of alternatives G = 1 − (1 − F)^k with k < 1. Best precedence tests for the other three classes of Lehmann-type alternatives are derived using their results. Finally, a comparison of precedence tests with Wilcoxon's two-sample test is presented.
Received: February 22, 1999; revised version: June 7, 2000
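As a hedged illustration (not taken from the paper; all function names are mine): under a Lehmann alternative G = F^k, samples from G can be drawn by inverse transform, since if U is Uniform(0,1) then F^{-1}(U^{1/k}) has CDF F^k, and a basic precedence statistic counts how many X-observations precede the r-th Y order statistic.

```python
import random

def sample_lehmann_gfk(n, k, f_inverse, rng):
    """Draw n observations from G = F^k by inverse transform:
    if U ~ Uniform(0,1), then F^{-1}(U^{1/k}) has CDF F^k."""
    return [f_inverse(rng.random() ** (1.0 / k)) for _ in range(n)]

def precedence_statistic(x_sample, y_sample, r):
    """Number of X-observations preceding the r-th Y order statistic."""
    y_r = sorted(y_sample)[r - 1]
    return sum(1 for x in x_sample if x < y_r)

# Deterministic check on fixed data: the 2nd Y order statistic is 0.4,
# and exactly two X values (0.1 and 0.3) precede it.
x = [0.1, 0.3, 0.9]
y = [0.2, 0.4, 0.8]
print(precedence_statistic(x, y, 2))  # -> 2
```

With F taken as the Uniform(0,1) CDF, `f_inverse` is the identity, so the sampler reduces to drawing U^{1/k}.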
23.
Overcoming biases and misconceptions in ecological studies
Katherine A. Guthrie & Lianne Sheppard 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2001,164(1):141-154
The aggregate data study design provides an alternative group level analysis to ecological studies in the estimation of individual level health risks. An aggregate model is derived by aggregating a plausible individual level relative rate model within groups, such that population-based disease rates are modelled as functions of individual level covariate data. We apply an aggregate data method to a series of fictitious examples from a review paper by Greenland and Robins which illustrated the problems that can arise when using the results of ecological studies to make inference about individual health risks. We use simulated data based on their examples to demonstrate that the aggregate data approach can address many of the sources of bias that are inherent in typical ecological analyses, even though the limited between-region covariate variation in these examples reduces the efficiency of the aggregate study. The aggregate method has the potential to estimate exposure effects of interest in the presence of non-linearity, confounding at individual and group levels, effect modification, classical measurement error in the exposure and non-differential misclassification in the confounder.
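A minimal sketch of the aggregation idea (my own toy construction, not the authors' implementation): take an individual-level log-linear relative-rate model, average it within each group to get a model-based group rate, and recover the effect parameter by matching model to observed group rates, here by a crude grid search.

```python
import math

def group_rate(baseline, beta, covariates):
    """Aggregate the individual-level relative-rate model r_i = baseline*exp(beta*x_i)
    within a group: the population-based group rate is the average individual rate."""
    return baseline * sum(math.exp(beta * x) for x in covariates) / len(covariates)

def fit_beta(observed_rates, groups, baseline, grid):
    """Grid-search beta minimising squared error between observed and model group rates."""
    def sse(b):
        return sum((r - group_rate(baseline, b, g)) ** 2
                   for r, g in zip(observed_rates, groups))
    return min(grid, key=sse)

# Fictitious groups with different covariate mixes; rates generated at beta = 0.5,
# so the fit should recover that value exactly on this grid.
groups = [[0.0, 1.0, 2.0], [1.0, 2.0, 3.0], [0.0, 0.0, 1.0]]
true_beta, baseline = 0.5, 0.01
rates = [group_rate(baseline, true_beta, g) for g in groups]
grid = [i / 100 for i in range(0, 101)]
print(fit_beta(rates, groups, baseline, grid))  # -> 0.5
```

The key point the sketch makes is that the group rate depends on the full within-group covariate distribution, not just its mean, which is what lets the aggregate model avoid classical ecological bias.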
24.
This paper considers five test statistics for comparing the recovery of a rapid growth‐based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are functions of correlated count data. A simulation study is conducted to investigate the type I and type II error rates. For a balanced experimental design, the likelihood ratio test and the main effects analysis of variance (ANOVA) test for microbiological methods demonstrated nominal values for the type I error rate and provided the highest power compared with a test on weighted averages and two other ANOVA tests. The likelihood ratio test is preferred because it can also be used for unbalanced designs. It is demonstrated that an increase in power can only be achieved by an increase in the spiked number of organisms used in the experiment. The power is surprisingly not affected by the number of dilutions or the number of test samples. A real case study is provided to illustrate the theory. Copyright © 2013 John Wiley & Sons, Ltd.
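To make the likelihood-ratio idea concrete, here is a hedged sketch (a deliberately simplified stand-in, not the paper's correlated-count model): a likelihood-ratio statistic for testing equal Poisson mean counts between two enumeration methods, which under the null is approximately chi-squared with one degree of freedom.

```python
import math

def poisson_loglik(counts, lam):
    """Poisson log-likelihood of a sample at mean lam."""
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def lrt_two_poisson(counts_a, counts_b):
    """Likelihood-ratio statistic for H0: the two methods have equal Poisson means.
    Approximately chi-squared with 1 df under H0."""
    lam_a = sum(counts_a) / len(counts_a)
    lam_b = sum(counts_b) / len(counts_b)
    lam_0 = (sum(counts_a) + sum(counts_b)) / (len(counts_a) + len(counts_b))
    ll_alt = poisson_loglik(counts_a, lam_a) + poisson_loglik(counts_b, lam_b)
    ll_null = poisson_loglik(counts_a, lam_0) + poisson_loglik(counts_b, lam_0)
    return 2.0 * (ll_alt - ll_null)

# Identical samples give a statistic of 0; clearly separated count levels give a
# statistic far beyond the 5% chi-squared(1) critical value of 3.84.
print(round(lrt_two_poisson([10, 12, 11], [10, 12, 11]), 6))  # -> 0.0
print(lrt_two_poisson([10, 12, 11], [30, 32, 31]) > 3.84)     # -> True
```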
25.
M.N.M. van Lieshout 《Scandinavian Journal of Statistics》2013,40(4):734-751
We introduce a class of random fields that can be understood as discrete versions of multicolour polygonal fields built on regular linear tessellations. We focus first on a subclass of consistent polygonal fields, for which we show Markovianity and solvability by means of a dynamic representation. This representation is used to design new sampling techniques for Gibbsian modifications of such fields, a class which covers lattice‐based random fields. A flux‐based modification is applied to the extraction of the field tracks network from a Synthetic Aperture Radar image of a rural area.
26.
We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence of [Formula: See Text] at a fixed point t if the density has a negative derivative at t. The same rate is obtained by a kernel estimator of bandwidth [Formula: See Text], but the limit distributions are different. If the density is both differentiable at t and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence [Formula: See Text], and compare the limit distributions of the three types of estimators. It is shown that both isotonization and smoothing lead to a more concentrated limit distribution and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behaviour of a kernel estimator with a bandwidth larger than [Formula: See Text], in the case that the density is known to have more than one derivative.
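The third estimator described above, isotonization of a kernel estimator, can be sketched as follows (my own minimal construction, not the authors' code): compute a Gaussian kernel density estimate on a grid, then project it onto non-increasing sequences by the pool-adjacent-violators algorithm applied to the reversed sequence.

```python
import math, random

def kernel_density(data, grid, h):
    """Gaussian kernel density estimate evaluated on a grid of points."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return [c * sum(math.exp(-0.5 * ((t - x) / h) ** 2) for x in data) for t in grid]

def isotonize_decreasing(values):
    """Least-squares projection onto non-increasing sequences:
    pool-adjacent-violators run on the reversed (non-decreasing) problem."""
    blocks = []  # (mean, weight) pairs with non-decreasing means
    for v in reversed(values):
        mean, weight = v, 1
        while blocks and blocks[-1][0] > mean:  # violator: pool with previous block
            m2, w2 = blocks.pop()
            mean = (mean * weight + m2 * w2) / (weight + w2)
            weight += w2
        blocks.append((mean, weight))
    out = []
    for mean, weight in blocks:
        out.extend([mean] * weight)
    return list(reversed(out))

rng = random.Random(1)
data = [rng.expovariate(1.0) for _ in range(200)]  # true density is decreasing
grid = [0.05 * i for i in range(1, 61)]
smoothed = kernel_density(data, grid, h=0.3)
monotone = isotonize_decreasing(smoothed)
print(all(a >= b - 1e-12 for a, b in zip(monotone, monotone[1:])))  # -> True
```

The grid size, bandwidth, and sample here are illustrative choices only; the paper's point is about how the bandwidth order interacts with the isotonization step.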
27.
Probability plots are often used to estimate the parameters of distributions. Using large sample properties of the empirical distribution function and order statistics, weights to stabilize the variance in order to perform weighted least squares regression are derived. Weighted least squares regression is then applied to the estimation of the parameters of the Weibull and the Gumbel distributions. The weights are independent of the parameters of the distributions considered. Monte Carlo simulation shows that the weighted least-squares estimators consistently outperform the usual least-squares estimators, especially in small samples.
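A hedged sketch of the Weibull case (my construction; the plotting positions and delta-method weights below are common textbook choices, not necessarily the exact weights derived in the paper): on a Weibull probability plot, ln(−ln(1−F)) is linear in ln x with slope equal to the shape parameter, so a weighted least-squares line yields both parameters.

```python
import math, random

def weibull_plot_wls(data):
    """Weighted least-squares fit on a Weibull probability plot.
    Plotting positions: Benard's median ranks p_i = (i - 0.3)/(n + 0.4).
    Weights: delta-method variance stabilisation, w_i = (1-p) ln(1-p)^2 / p
    (an assumption here, not taken from the paper)."""
    xs = sorted(data)
    n = len(xs)
    pts = []
    for i, x in enumerate(xs, start=1):
        p = (i - 0.3) / (n + 0.4)
        u = math.log(x)                    # abscissa: ln x
        y = math.log(-math.log(1.0 - p))   # ordinate: ln(-ln(1 - p))
        w = (1.0 - p) * math.log(1.0 - p) ** 2 / p
        pts.append((u, y, w))
    sw = sum(w for _, _, w in pts)
    ubar = sum(w * u for u, _, w in pts) / sw
    ybar = sum(w * y for _, y, w in pts) / sw
    slope = (sum(w * (u - ubar) * (y - ybar) for u, y, w in pts)
             / sum(w * (u - ubar) ** 2 for u, _, w in pts))
    intercept = ybar - slope * ubar
    shape = slope                          # y = k ln x - k ln(scale)
    scale = math.exp(-intercept / slope)
    return shape, scale

rng = random.Random(7)
# Weibull(shape=2, scale=1.5) sample via inverse transform
sample = [1.5 * (-math.log(1.0 - rng.random())) ** 0.5 for _ in range(500)]
shape, scale = weibull_plot_wls(sample)
print(abs(shape - 2.0) < 0.3, abs(scale - 1.5) < 0.2)
```

Note that the weights depend only on the plotting positions p_i, which illustrates the abstract's remark that they are independent of the distribution's parameters.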
28.
Egbert A. van der Meulen 《Communications in Statistics - Theory and Methods》2013,42(5):699-708
The Fisher exact test has been unjustly dismissed by some as 'only conditional,' whereas it is unconditionally the uniformly most powerful test among all unbiased tests of size α with power greater than the nominal level of significance α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of size α. Obviously, in practice, one does not want to conclude that 'with probability x we have a statistically significant result.' Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, reducing the actual size considerably. The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) conditioned upon. In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter, smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test compared with the conservative nonrandomized Fisher exact test. Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test with the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
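The conditional machinery involved is easy to state concretely. The sketch below (my own code, standard textbook computation) evaluates the one-sided conditional Fisher p-value for a 2×2 table from the hypergeometric distribution of the first cell given all margins, together with the mid-p value, which subtracts half the probability of the observed table.

```python
from math import comb

def fisher_pvalues(a, b, c, d):
    """One-sided conditional Fisher p-value and mid-p for the 2x2 table
    [[a, b], [c, d]], conditioning on all margins (hypergeometric null)."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    total = comb(n, col1)
    def prob(k):  # P(first cell = k | margins) under the null
        return comb(row1, k) * comb(row2, col1 - k) / total
    hi = min(row1, col1)                        # upper end of the support
    p_ge = sum(prob(k) for k in range(a, hi + 1))   # P(X >= a)
    mid_p = p_ge - 0.5 * prob(a)                    # mid-p correction
    return p_ge, mid_p

p, mid = fisher_pvalues(8, 2, 3, 7)
print(round(p, 4), mid < p)  # -> 0.0349 True
```

The mid-p value is always strictly smaller than the conventional conditional p-value, which is exactly why the abstract calls it "more competitive" with the conservative nonrandomized test.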
29.
José Manuel Herrerías-Velasco Rafael Herrerías-Pleguezuelo Johan René van Dorp 《Journal of applied statistics》2009,36(5):573-587
The generalized standard two-sided power (GTSP) distribution was mentioned only in passing by Kotz and van Dorp (Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications, World Scientific Press, Singapore, 2004). In this paper, we further investigate this three-parameter distribution by presenting some novel properties, and use its more general form to trace the chronology of developments by various authors on the two-parameter TSP distribution since its initial introduction. GTSP distributions allow for J-shaped forms of the pdf, whereas TSP distributions are limited to U-shaped and unimodal forms; hence, GTSP distributions possess the same three distributional shapes as the classical beta distributions. A novel method and algorithm for the indirect elicitation of the two power parameters of the GTSP distribution is developed. We present a Project Evaluation Review Technique example that utilizes this algorithm and demonstrates the benefit of separate powers for the two branches of activity GTSP distributions in project completion time uncertainty estimation.
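As a hedged sketch of the density (my parameterization and my derivation of the normalising constant, chosen so that m = n recovers the two-parameter TSP case; the paper's exact notation may differ): the GTSP density on [0, 1] has a power branch on each side of a turning point theta, with separate powers m and n.

```python
def gtsp_pdf(x, theta, m, n):
    """Sketch of the generalized standard two-sided power (GTSP) density on [0, 1]:
    turning point theta, left-branch power m, right-branch power n.
    (m = n reduces to the two-parameter TSP distribution.)"""
    if x < 0.0 or x > 1.0:
        return 0.0
    # Normalising constant: integrating the branches gives c*(theta/m + (1-theta)/n) = 1.
    c = m * n / (n * theta + m * (1.0 - theta))
    if x <= theta:
        return c * (x / theta) ** (m - 1.0)
    return c * ((1.0 - x) / (1.0 - theta)) ** (n - 1.0)

# Midpoint-rule check that the density integrates to 1 (unimodal case m=2, n=3).
N = 20000
s = sum(gtsp_pdf((i + 0.5) / N, 0.3, 2.0, 3.0) / N for i in range(N))
print(abs(s - 1.0) < 1e-3)  # -> True
```

Taking m < 1 with n > 1 (or vice versa) makes one branch unbounded at the endpoint, giving the J-shaped forms the abstract says TSP distributions cannot produce.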
30.
F. W. A. van Poppel 《Revue européenne de démographie》1985,1(4):347-373
Both published and unpublished results from the 1930, 1947, 1960 and 1971 censuses are used to examine the influence that religious denomination, socio-economic group and region exerted on the fertility of marriages contracted between 1876 and 1959. A theory formulated by Lesthaeghe and Wilson on the relation between modes of production and secularization and the pace of fertility decline in Western Europe offers, in combination with van Heek's views on the special position of Dutch Roman Catholicism, a starting point for an explanation of why the fertility decline of Roman Catholics, the self-employed and agricultural labourers lagged behind.