361.
Gary Smith. The American Statistician, 2013, 67(4): 231-235
Baseball performances are an imperfect measure of baseball abilities, and consequently exaggerate differences in abilities. Predictions of relative batting averages and earned run averages can be improved substantially by using correlation coefficients estimated from earlier seasons to shrink performances toward the mean.
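The shrinkage idea in this abstract can be sketched in a few lines: each observed performance is pulled back toward the group mean by the season-to-season correlation. The batting averages and the correlation value below are hypothetical illustration numbers, not the article's estimates.

```python
# Illustrative shrinkage of observed batting averages toward the league mean.
# The correlation r between consecutive seasons (here a hypothetical 0.4)
# controls how far each performance is pulled back toward the mean.

def shrink_toward_mean(observed, r):
    """Shrink each observed value toward the group mean by factor r."""
    mean = sum(observed) / len(observed)
    return [mean + r * (x - mean) for x in observed]

averages = [0.320, 0.280, 0.250, 0.230]  # hypothetical batting averages
predicted = shrink_toward_mean(averages, r=0.4)
print([round(p, 3) for p in predicted])  # each value moves toward 0.270
```

With r = 1 the prediction equals the raw performance; with r = 0 every player is predicted at the mean, which is the sense in which raw performances "exaggerate" ability differences.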
362.
Baiming Zou, Xinlei Mi, Patrick J. Tighe, Gary G. Koch, Fei Zou. Pharmaceutical Statistics, 2021, 20(4): 752-764
Post-marketing data offer rich information and cost-effective resources for physicians and policy-makers to address critical scientific questions in clinical practice. However, the complex confounding structures (e.g., nonlinear and nonadditive interactions) embedded in these observational data often pose major analytical challenges to drawing valid conclusions. Furthermore, often made available as electronic health records (EHRs), these data are usually massive, with hundreds of thousands of observational records, which introduces additional computational challenges. In this paper, for comparative effectiveness analysis, we propose a statistically robust yet computationally efficient propensity score (PS) approach to adjust for the complex confounding structures. Specifically, we propose a kernel-based machine learning method for flexible and robust PS modeling to obtain valid PS estimates from observational data with complex confounding structures. The estimated propensity score is then used in the second-stage analysis to obtain a consistent average treatment effect estimate. An empirical variance estimator based on the bootstrap is adopted. A split-and-merge algorithm is further developed to reduce the computational workload of the proposed method for big data, and to obtain a valid variance estimator of the average treatment effect estimate as a by-product. As shown by extensive numerical studies and an application to comparative effectiveness analysis of postoperative pain EHR data, the proposed approach consistently outperforms competing methods, demonstrating its practical utility.
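The second-stage analysis described above can be illustrated with the standard inverse-propensity-weighted (IPW) estimator of the average treatment effect, taking the propensity scores as given (however they were estimated, e.g., by the kernel machine-learning model in the paper). All data values below are hypothetical.

```python
# IPW estimate of the average treatment effect (ATE) from outcomes y,
# treatment indicators t, and propensity scores e. This is a generic
# second-stage sketch, not the paper's full procedure.

def ipw_ate(y, t, e):
    """ATE = mean(t*y/e) - mean((1-t)*y/(1-e))."""
    n = len(y)
    treated = sum(ti * yi / ei for yi, ti, ei in zip(y, t, e)) / n
    control = sum((1 - ti) * yi / (1 - ei) for yi, ti, ei in zip(y, t, e)) / n
    return treated - control

y = [3.0, 2.0, 5.0, 1.0]   # hypothetical outcomes
t = [1, 0, 1, 0]           # treatment indicators
e = [0.5, 0.5, 0.5, 0.5]   # propensity scores (constant here for clarity)
print(ipw_ate(y, t, e))
```

With constant propensity scores the estimator reduces to a simple difference in group means; the value of flexible PS modeling shows up precisely when the scores vary with confounders.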
363.
New optimization and heuristic methods are described to address supply chain management problems in distributed manufacturing settings. Specifically, integer programming formulations and heuristic methods are developed to design and evaluate optimal or near-optimal delivery plans for material movements between sites in a truckload trucking environment for the benefit of carriers, customers and professional drivers. The tools developed herein are appropriate for examining delivery needs between suppliers, manufacturers, distribution centres, and customer locations. They are equally applicable to more complex situations involving the return of packaging materials to the original shipment site, or even concurrent consideration of multiple business entities with various shipment profiles. Realistically sized case studies are provided to demonstrate the efficacy of the approaches using data supplied by J.B. Hunt Transport, Inc.
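The objective behind such delivery plans can be shown on a toy scale: order a set of required truckload moves so that empty (deadhead) repositioning distance is minimized. The article uses integer programs and heuristics; the brute-force sketch below, with hypothetical one-dimensional site positions, only illustrates the objective.

```python
import itertools

# Toy delivery-sequencing sketch: one driver runs three required truckload
# moves; we pick the order that minimizes deadhead (empty) distance between
# a drop-off and the next pickup. Sites and loads are hypothetical.

sites = {"A": 0, "B": 50, "C": 120}           # site positions (miles)
loads = [("A", "B"), ("B", "C"), ("C", "A")]  # (origin, destination) moves

def empty_miles(order):
    """Deadhead distance when the loads are run in this order."""
    total = 0
    for (_, drop), (pickup, _) in zip(order, order[1:]):
        total += abs(sites[drop] - sites[pickup])  # reposition drop -> next pickup
    return total

best = min(itertools.permutations(loads), key=empty_miles)
print(best, empty_miles(best))
```

Real instances are far too large for enumeration, which is why the article turns to integer programming formulations and heuristics.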
364.
Gary N. McLean. Human Resource Development International, 2013, 16(3): 351-354
Most academic institutions convey to faculty that they must have a clearly identified single or dual stream of research. Such a standard inhibits expertise. It stifles creativity and innovation and leaves faculty unable to deal effectively with a broad range of doctoral students. People like Leonardo da Vinci, and other outstanding experts, fall outside such a standard and, in today's academic world, would be denied promotion and tenure. Such artificial standards, forcing all faculty to behave in the same way, must be set aside.
365.
Gary Witt. Communications in Statistics - Theory and Methods, 2014, 43(20): 4265-4280
This article describes a generalization of the binomial distribution. The closed-form probability function for the probability of k successes out of n correlated, exchangeable Bernoulli trials depends on the number of trials and its two parameters: the common success probability and the common correlation. The distribution is derived under the assumption that the common correlation between all pairs of Bernoulli trials remains unchanged conditional on successes in all completed trials. The distribution was developed to model bond defaults but may be suited to biostatistical applications involving clusters of binary data encountered in repeated measurements or toxicity studies of families of organisms. Maximum likelihood estimates for the parameters of the distribution are found for a set of binary data from a developmental toxicity study on litters of mice.
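A standard two-parameter model for exchangeable, equi-correlated Bernoulli trials is the beta-binomial, which is NOT the closed form derived in this article but illustrates the same idea: a common success probability p = a/(a+b) and a common pairwise correlation rho = 1/(a+b+1).

```python
import math

# Beta-binomial pmf: P(k successes in n exchangeable trials under
# Beta(a, b) mixing). A sketch of a correlated-binomial model, offered
# only as an analogue of the article's two-parameter distribution.

def beta_binomial_pmf(k, n, a, b):
    log_p = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
             + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    return math.exp(log_p)

n, a, b = 10, 2.0, 3.0   # hypothetical parameters: p = 0.4, rho = 1/6
pmf = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
print(round(sum(pmf), 6))  # the probabilities sum to 1
```

Over-dispersion relative to the plain binomial (variance inflated by a factor of 1 + (n-1)·rho) is what makes such models useful for clustered binary data like litters of mice.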
366.
Gary Yukl. The Leadership Quarterly, 2009, 20(1): 49-53
This essay conveys some of the author's ideas about the influence of leaders on organizational learning. Limitations of some well-known leadership theories for explaining this influence are described, and ideas for developing more comprehensive and accurate theories are suggested. Examples of specific ways leaders can influence organizational learning are provided. The methods used for most of the research on the subject are evaluated, and some alternative methods are suggested.
367.
We consider the test of the null hypothesis that the largest mean in a mixture of an unknown number of normal components is less than or equal to a given threshold. This test is motivated by the problem of assessing whether the Soviet Union has been operating in compliance with the Nuclear Test Ban Treaty. In our analysis, the number of normal components is determined using Akaike's Information Criterion, while the hypothesis test itself is based on asymptotic results given by Behboodian for a mixture of two normal components. A bootstrap approach is also considered for estimating the standard error of the largest estimated mean. The performance of the tests is examined through the use of simulation.
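The bootstrap step can be sketched as follows. This simplifies the article's setting (component means from a fitted normal mixture) to plain group means, but the resampling logic for the standard error of the largest estimated mean is the same; the data are hypothetical.

```python
import random

# Bootstrap standard error of the largest group mean: resample each group
# with replacement, recompute the maximum of the group means, and take the
# standard deviation of that statistic over many replicates.

def bootstrap_se_of_max_mean(groups, n_boot=2000, seed=0):
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        boot_means = []
        for g in groups:
            resample = [rng.choice(g) for _ in g]
            boot_means.append(sum(resample) / len(resample))
        stats.append(max(boot_means))
    m = sum(stats) / n_boot
    return (sum((s - m) ** 2 for s in stats) / (n_boot - 1)) ** 0.5

groups = [[4.9, 5.2, 5.0, 5.1], [6.8, 7.1, 7.0, 6.9]]  # hypothetical samples
print(round(bootstrap_se_of_max_mean(groups), 3))
```

Because max() is a non-smooth functional, the bootstrap is a natural way to get its standard error where analytic formulas are awkward.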
368.
In tumorigenicity experiments, each animal begins in a tumor-free state and then either develops a tumor or dies before developing a tumor. Animals that develop a tumor either die from the tumor or from other competing causes. All surviving animals are sacrificed at the end of the experiment, normally two years. The two most commonly used statistical tests are the logrank test for comparing hazards of death from rapidly lethal tumors and the Hoel-Walburg test for comparing prevalences of nonlethal tumors. However, the data obtained from a carcinogenicity experiment generally contain a mixture of fatal and incidental tumors. Peto et al. (1980) suggested combining the fatal and incidental tests for a comparison of tumor onset distributions. Extensive simulations show that the trend test for tumor onset using the Peto procedure has the proper size, under the simulation constraints, when each group has identical mortality patterns, and the test with continuity correction tends to be conservative. When the animals in the dosed groups have reduced survival rates, the type I error rate is likely to exceed the nominal level. The continuity correction is recommended for a small reduction in survival time among the dosed groups to ensure the proper size. However, when there is a large reduction in survival times in the dosed groups, the onset test does not have the proper size.
369.
A message coming out of the recent Bayesian literature on cointegration is that it is important to elicit a prior on the space spanned by the cointegrating vectors (as opposed to a particular identified choice for these vectors). In previous work, such priors have been found to greatly complicate computation. In this article, we develop algorithms to carry out efficient posterior simulation in cointegration models. In particular, we develop a collapsed Gibbs sampling algorithm which can be used with just-identified models and demonstrate that it has very large computational advantages relative to existing approaches. For over-identified models, we develop a parameter-augmented Gibbs sampling algorithm and demonstrate that it also has attractive computational properties.
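Gibbs sampling, the building block of both algorithms, alternates draws from full conditional distributions. The minimal sketch below samples a bivariate normal with correlation rho; it is a generic illustration of posterior simulation by Gibbs sampling, not the article's collapsed or parameter-augmented sampler, and rho and the chain length are illustration values.

```python
import random

# Two-block Gibbs sampler for a standard bivariate normal with correlation
# rho: each coordinate is drawn from its full conditional given the other.

def gibbs_bivariate_normal(rho, n_iter=5000, seed=42):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = (1 - rho * rho) ** 0.5
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)  # y | x ~ N(rho*x, 1 - rho^2)
        draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
mean_x = sum(d[0] for d in draws) / len(draws)
print(round(mean_x, 2))  # should be near the true mean of 0
```

"Collapsing" in the article's sense means integrating some parameters out analytically before sampling, which typically reduces autocorrelation in the chain.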
370.
This article considers spatial data z(s_1), z(s_2), …, z(s_n) collected at n locations, with the objective of predicting z(s_0) at another location. The usual method of analysis for this problem is kriging, but here we introduce a new signal-plus-noise model whose essential feature is the identification of hot spots. The signal decays in relation to distance from hot spots. We show that hot spots can be located with high accuracy and that the decay parameter can be estimated accurately. This new model compares well to kriging in simulations.
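A toy version of such a signal-plus-noise surface can be written directly: the signal at a location is a sum of contributions that decay exponentially with distance from each hot spot, plus observation noise. The hot-spot locations, amplitude, decay rate, and noise level below are hypothetical illustration values, not the article's model fit.

```python
import math
import random

# Signal-plus-noise sketch: signal(s) = sum over hot spots h of
# amplitude * exp(-decay * distance(s, h)); observations add Gaussian noise.

def signal(s, hot_spots, amplitude=2.0, decay=1.5):
    return sum(amplitude * math.exp(-decay * math.dist(s, h)) for h in hot_spots)

rng = random.Random(1)
hot_spots = [(0.3, 0.7), (0.8, 0.2)]
locations = [(rng.random(), rng.random()) for _ in range(50)]
z = [signal(s, hot_spots) + rng.gauss(0, 0.05) for s in locations]

# Predicting z(s0) at a new location uses the (here: true) signal surface.
s0 = (0.5, 0.5)
print(round(signal(s0, hot_spots), 3))
```

In practice the hot-spot locations and the decay parameter would be estimated from the observed z values (e.g., by grid search plus least squares), which is the inference problem the article addresses.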