91.
Melissa R.W. George, Na Yang, Jessalyn Smith, Thomas Jaki & Daniel J. Feaster, Journal of Statistical Computation and Simulation 2013, 83(4):759-772
Mild to moderate skew in the errors can substantially affect regression mixture model results; one approach to overcoming this is to transform the outcome into an ordered categorical variable and fit a polytomous regression mixture model. This is effective for retaining differential effects in the population; however, bias in parameter estimates and in model fit warrants further examination of this approach at higher levels of skew. The current study used Monte Carlo simulations: 3000 observations were drawn from each of two subpopulations differing in the effect of X on Y, and 500 simulations were performed in each of 10 scenarios varying the level of skew in one or both classes. Model comparison criteria supported the accurate two-class model, preserving the differential effects, but parameter estimates were notably biased. The appropriate number of effects can be captured with this approach, but we suggest caution when interpreting the magnitude of the effects.
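The simulation design described above can be sketched as follows. This is a minimal illustration, not the authors' code: the per-class sample size and the quantile-based categorization follow the abstract, while the chi-square error distribution, its degrees of freedom, and the slopes 0.2/0.8 are hypothetical choices standing in for the unspecified details.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_mixture(n_per_class=3000, skew_df=4):
    """Two subpopulations differing in the effect of X on Y, with
    chi-square (right-skewed) errors centred to zero mean, unit variance."""
    x = rng.normal(size=2 * n_per_class)
    slope = np.repeat([0.2, 0.8], n_per_class)           # differential effects
    err = rng.chisquare(skew_df, size=2 * n_per_class)
    err = (err - skew_df) / np.sqrt(2 * skew_df)         # centre and scale
    return x, slope * x + err

def to_ordinal(y, n_cats=5):
    """Collapse the outcome into ordered categories at its sample quantiles,
    as done before fitting a polytomous regression mixture model."""
    cuts = np.quantile(y, np.linspace(0, 1, n_cats + 1)[1:-1])
    return np.digitize(y, cuts)

x, y = draw_mixture()
y_ord = to_ordinal(y)          # ordered categories 0..4, roughly equal sizes
```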
92.
Recursive partitioning algorithms separate a feature space into a set of disjoint rectangles; then, usually, a constant is fitted in every partition. While this is a simple and intuitive approach, it may lack interpretability as to what a specific relationship between the dependent and independent variables looks like. Alternatively, a certain model may be assumed or of interest, with a number of candidate variables that may non-linearly give rise to different model parameter values. We present an approach that combines generalized linear models (GLMs) with recursive partitioning, offering enhanced interpretability relative to classical trees as well as an explorative way to assess a candidate variable's influence on a parametric model. The method conducts recursive partitioning of a GLM by (1) fitting the model to the data set, (2) testing for parameter instability over a set of partitioning variables, and (3) splitting the data set with respect to the variable associated with the highest instability. The outcome is a tree in which each terminal node is associated with a GLM. We show the method's versatility and its suitability for gaining additional insight into the relationship between dependent and independent variables through two examples, modelling voting behaviour and a failure model for debt amortization, and compare it to alternative approaches.
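The three-step loop can be sketched as below. Two simplifications, neither of which is the authors' algorithm: an ordinary least-squares linear model stands in for the GLM, and a residual-sum-of-squares improvement at the median split stands in for the formal parameter-instability tests of step (2).

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, float(resid @ resid)

def glm_tree(X, y, Z, min_size=50):
    """Recursively partition a linear model over candidate variables Z:
    fit, score each median split by RSS improvement, split on the best,
    and attach a fitted model to each terminal node."""
    _, rss = fit_ols(X, y)
    best = None
    for j in range(Z.shape[1]):
        cut = np.median(Z[:, j])
        left = Z[:, j] <= cut
        if left.sum() < min_size or (~left).sum() < min_size:
            continue
        gain = rss - fit_ols(X[left], y[left])[1] - fit_ols(X[~left], y[~left])[1]
        if best is None or gain > best[0]:
            best = (gain, j, cut, left)
    if best is None or best[0] <= 0:
        return {"model": fit_ols(X, y)[0]}            # terminal node: one model
    _, j, cut, left = best
    return {"split": (j, cut),
            "left": glm_tree(X[left], y[left], Z[left], min_size),
            "right": glm_tree(X[~left], y[~left], Z[~left], min_size)}
```

On data where the slope of Y on X flips at a threshold of a partitioning variable, the root node splits on that variable and each branch recovers its own coefficients.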
93.
Thomas A. Louis, The American Statistician 2013, 67(3)
The easily computed, one-sided confidence interval for the binomial parameter provides the basis for an interesting classroom example of scientific thinking and its relationship to confidence intervals. The upper limit can be represented as the sample proportion from a number of “successes” in a future experiment of the same sample size. The upper limit reported by most people corresponds closely to that producing a 95 percent classical confidence interval and has a Bayesian interpretation.
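The one-sided limit in question is easy to compute exactly. A minimal sketch, assuming the standard Clopper-Pearson-style construction (the upper limit is the p at which observing x or fewer successes has probability exactly alpha), solved here by bisection with only the standard library:

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(x + 1))

def upper_limit(x, n, alpha=0.05, tol=1e-10):
    """One-sided upper confidence limit for the binomial parameter:
    the p solving P(X <= x; n, p) = alpha, found by bisection
    (the CDF is decreasing in p)."""
    if x == n:
        return 1.0
    lo, hi = x / n, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(x, n, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For zero successes in n trials the limit has the closed form 1 - alpha**(1/n), which the bisection reproduces.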
94.
Stephen E. Fienberg, The American Statistician 2013, 67(3):192-196
Bonferroni inequalities often provide tight upper and lower bounds for the probability of a union of events. The bounds are especially useful when this probability is difficult to compute exactly. There are situations, however, in which the Bonferroni approach gives very poor results. An example is given in which the upper and lower Bonferroni bounds are far from the probability they seek to approximate and successive bounds do not converge. Even an improved first upper Bonferroni bound may not be close to the probability of the union of events.
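A minimal sketch of the failure mode for exchangeable events, using the first-order upper bound S1 and the second-order lower bound S1 - S2 (not the specific example of the paper): take ten copies of the same event with probability 0.5, so the union probability is 0.5, yet the clipped bounds collapse to the trivial interval [0, 1].

```python
from math import comb

def bonferroni_bounds(p_single, p_pair, n):
    """First- and second-order Bonferroni bounds on P(A1 ∪ ... ∪ An) for
    exchangeable events: S1 = n·P(Ai), S2 = C(n,2)·P(Ai ∩ Aj);
    upper bound min(1, S1), lower bound max(0, S1 - S2)."""
    s1 = n * p_single
    s2 = comb(n, 2) * p_pair
    return max(0.0, s1 - s2), min(1.0, s1)

# Ten identical events with P = 0.5: P(union) = 0.5, intersections also 0.5.
lower, upper = bonferroni_bounds(0.5, 0.5, 10)   # S1 = 5, S2 = 22.5
```

Here S1 = 5 overshoots past 1 and S1 - S2 = -17.5 undershoots past 0, so after clipping the bounds convey no information at all about the true value 0.5.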
95.
The growing popular realization that American product quality and productivity are no longer without challenge for world leadership presents an opportunity for the American statistical community to make stronger contributions to sound industrial practice than it has in the past. Management consultants, such as Deming and Juran, are promoting philosophies that contain strong statistical components and are being heard by top U.S. executives. There are thus growing opportunities for industrial statisticians. Upon reviewing the content of typical graduate-level statistical quality control courses and books in the light of the present situation, we find them to be inadequate and in some cases to suffer from inappropriate emphases. In this article we discuss our perceptions of what is needed in the way of a new graduate-level course in statistics for quality and productivity (SQP). We further offer for discussion a syllabus for such a course (which is a modification of one used at Iowa State in the 1983 spring semester), some comments on how specific topics might be approached, and also a partially annotated list of references for material that we believe belongs in a modern SQP course.
96.
The use of the correlation coefficient is suggested as a technique for summarizing and objectively evaluating the information contained in probability plots. Goodness-of-fit tests are constructed using this technique for several commonly used plotting positions for the normal distribution. Empirical sampling methods are used to construct the null distribution for these tests, which are then compared on the basis of power against certain nonnormal alternatives. Commonly used regression tests of fit are also included in the comparisons. The results indicate that use of the plotting position p_i = (i - 0.375)/(n + 0.25) yields a competitive regression test of fit for normality.
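The test statistic itself is short to compute: the correlation between the ordered sample and the normal quantiles evaluated at the recommended plotting positions. A sketch (the critical values, which the paper obtains by empirical sampling, are not reproduced here):

```python
import numpy as np
from statistics import NormalDist

def normality_corr(sample):
    """Correlation between the ordered sample and standard normal quantiles
    at the plotting positions p_i = (i - 0.375)/(n + 0.25); values near 1
    are consistent with normality, smaller values signal lack of fit."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    q = np.array([NormalDist().inv_cdf(pi) for pi in p])
    return float(np.corrcoef(x, q)[0, 1])
```

On a normal sample the statistic sits very close to 1; on a skewed (e.g. exponential) sample it drops noticeably, which is what the power comparisons exploit.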
97.
Thomas R. Willemain, Ali Allahverdi, Philip Desautels, Janine Eldredge, Ozden Gur & Gregory Panos, Communications in Statistics - Simulation and Computation 2013, 42(4):1043-1075
We compare the performance of seven robust estimators for the parameter of an exponential distribution. These include the debiased median and two optimally-weighted one-sided trimmed means. We also introduce four new estimators: the Transform, Bayes, Scaled and Bicube estimators. We make the Monte Carlo comparisons for three sample sizes and six situations. We evaluate the comparisons in terms of a new performance measure, Mean Absolute Differential Error (MADE), and a premium/protection interpretation of MADE. We organize the comparisons to enhance statistical power by making maximal use of common random deviates. The Transform estimator provides the best performance as judged by MADE. The singly-trimmed mean and Transform method define the efficient frontier of premium/protection.
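A minimal sketch of one estimator and the comparison style, not the paper's study: the debiased median uses the fact that the median of an Exponential(θ) sample estimates θ·ln 2. The contamination scheme (5% of observations inflated tenfold) is a hypothetical choice, and plain mean absolute error stands in for the paper's MADE measure, whose precise definition is given there.

```python
import numpy as np

rng = np.random.default_rng(0)

def debiased_median(x):
    """Median-based estimate of the exponential mean: the sample median
    estimates θ·ln 2, so divide it out."""
    return np.median(x) / np.log(2)

def mc_compare(theta=1.0, n=50, reps=2000, contam=0.05):
    """Monte Carlo comparison of the sample mean and the debiased median
    under outlier contamination, scored by mean absolute error."""
    err_mean = err_med = 0.0
    for _ in range(reps):
        x = rng.exponential(theta, size=n)
        x[: int(contam * n)] *= 10.0          # gross outliers
        err_mean += abs(x.mean() - theta)
        err_med += abs(debiased_median(x) - theta)
    return err_mean / reps, err_med / reps

e_mean, e_med = mc_compare()   # the robust estimator wins under contamination
```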
98.
In many applications, decisions are made on the basis of a function of the parameters, g(θ). When the value of g(θ) is calculated using estimated values for the parameters, it is important to have a measure of the uncertainty associated with that value of g(θ). Likelihood ratio approaches to finding likelihood intervals for functions of parameters have been shown to be more reliable, in terms of coverage probability, than the linearization approach. Two generalizations of the profiling algorithm have been proposed in the literature to enable construction of likelihood intervals for a function of parameters (Chen and Jennrich, 1996; Bates and Watts, 1988). In this paper we show the equivalence of these two methods. We also provide an analysis of cases in which neither profiling algorithm is appropriate, and for one of these cases we suggest an alternate approach. Whereas generalized profiling is based on maximizing the likelihood function given a constraint on the value of g(θ), the alternative algorithm is based on optimizing g(θ) given a constraint on the value of the likelihood function.
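The inversion idea can be illustrated in the simplest possible setting, an exponential sample with g(θ) = 1/θ (the mean). This is only a scalar illustration, not either paper's algorithm: with one parameter the constraint g(θ) = c pins θ = 1/c exactly, so the constrained maximization reduces to a grid scan against the chi-square cutoff.

```python
import numpy as np

def loglik(rate, x):
    """Exponential log-likelihood at a given rate."""
    return len(x) * np.log(rate) - rate * x.sum()

def lr_interval_for_mean(x, chi2_cut=3.841, grid=None):
    """Likelihood-ratio interval for g(θ) = 1/θ, the exponential mean:
    keep every candidate mean c whose (constrained) maximum log-likelihood
    is within chi2_{1, 0.95}/2 of the unconstrained maximum."""
    rate_hat = 1.0 / x.mean()
    l_hat = loglik(rate_hat, x)
    if grid is None:
        grid = np.linspace(0.3 * x.mean(), 3.0 * x.mean(), 2000)
    keep = [c for c in grid if 2 * (l_hat - loglik(1.0 / c, x)) <= chi2_cut]
    return min(keep), max(keep)
```

The resulting interval contains the maximum-likelihood estimate by construction and, unlike a linearization interval, is asymmetric in the direction the likelihood dictates.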
99.
For the analysis of survey-weighted categorical data, one recommended method of analysis is a log-rate model. For each cell in a contingency table, the survey weights are averaged across subjects and incorporated into an offset for a loglinear model. Supposedly, one can then proceed with the analysis of unweighted observed cell counts. We provide theoretical and simulation-based evidence to show that the log-rate analysis is not an effective statistical analysis method and should not be used in general. The root of the problem is in its failure to properly account for variability in the individual weights within cells of a contingency table. This results in goodness-of-fit tests that have higher-than-nominal error rates and confidence intervals for odds ratios that have lower-than-nominal coverage.
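The criticized information loss is easy to see in code. A minimal sketch with hypothetical weights: the log-rate offset retains only the log of each cell's average weight, so two cells whose individual weights have the same mean but wildly different spread receive identical offsets.

```python
import numpy as np

def log_rate_offset(weights_by_cell):
    """The offset the log-rate model attaches to each cell of the
    contingency table: the log of the average survey weight. Everything
    about within-cell weight variability is discarded at this step."""
    return [float(np.log(np.mean(w))) for w in weights_by_cell]

# Two cells, same mean weight (1.0), very different variability.
w_constant = np.array([1.0, 1.0, 1.0, 1.0])
w_variable = np.array([0.1, 0.1, 1.9, 1.9])
offsets = log_rate_offset([w_constant, w_variable])   # identical offsets
```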
100.
The generalized Laplacian distribution is considered. A new distribution, called the geometric generalized Laplacian distribution, is introduced and its properties are studied. First- and higher-order autoregressive processes with these stationary marginal distributions are developed and studied. Simulation studies are conducted and trajectories of the process are obtained for selected values of the parameters. Various areas of application of these models are discussed.
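For flavour, a generic first-order autoregressive trajectory driven by Laplace innovations can be simulated as below. This is only a stand-in: the paper's processes are constructed so that the stationary *marginal* distribution is geometric generalized Laplacian, which requires a more specialised innovation sequence than the plain Laplace noise used here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_laplace(n=500, phi=0.7, scale=1.0):
    """Trajectory of a first-order autoregression X_t = phi·X_{t-1} + e_t
    with Laplace-distributed innovations e_t."""
    x = np.empty(n)
    x[0] = rng.laplace(scale=scale)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.laplace(scale=scale)
    return x

x = ar1_laplace()   # heavy-tailed, positively autocorrelated trajectory
```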