22.
I.J. Good 《Communications in Statistics - Theory and Methods》2013,42(4):1225-1231
The theory in Part I contained an error, which was inferred from the output of a program written in SAS by Eric P. Smith and David D. Morris. The program produces random BUS designs in accordance with the algorithm of Part I. The theory is corrected here by a combinatorial argument involving elementary number theory. The algorithm needs no change, but its interpretation is now adjusted.
23.
Anis I. Kanjo 《Communications in Statistics - Theory and Methods》2013,42(3):787-795
Let F(x) be a life distribution. An exact test is given for testing H0: F is exponential, versus H1: F ∈ NBUE (NWUE), along with a table of critical values for n = 5(1)80 and n = 80(5)165. An asymptotic test is made available for large values of n, where the standard normal table can be used for testing.
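As a rough illustration of the kind of test described above (not the paper's exact procedure or its critical-value table), the sketch below uses the normalized total-time-on-test statistic, whose null distribution under exponentiality is free of the scale parameter, and approximates a critical value by Monte Carlo. All function names and settings are illustrative assumptions.

```python
import numpy as np

def ttt_statistic(x):
    """Normalized cumulative total-time-on-test statistic.

    Under H0 (exponentiality) its distribution does not depend on the
    scale parameter, so critical values depend only on n.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    d = np.diff(np.concatenate(([0.0], x)))    # spacings between order stats
    ttt = np.cumsum((n - np.arange(n)) * d)    # T_1, ..., T_n
    return ttt[:-1].sum() / ((n - 1) * ttt[-1])

def mc_critical_value(n, alpha=0.05, n_mc=10000, seed=0):
    """Monte Carlo critical value for a one-sided NBUE-type alternative."""
    rng = np.random.default_rng(seed)
    stats = np.array([ttt_statistic(rng.exponential(size=n))
                      for _ in range(n_mc)])
    return np.quantile(stats, 1 - alpha)

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=30)
stat = ttt_statistic(sample)
crit = mc_critical_value(30)
reject = bool(stat > crit)     # large values point away from exponentiality
```

Because the statistic is scale-free, the same simulated critical value applies whatever the unknown mean of the exponential null.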
24.
A 2 × 2 contingency table can often be analysed exactly by using Fisher's exact test and approximately by using the chi-squared test with Yates' continuity correction, and it is traditionally held that the approximation is valid when the minimum expected quantity E satisfies E ≥ 5. Unfortunately, little research has been carried out into this belief, other than establishing that a bound E > E* is necessary, that the condition E ≥ 5 may not be the most appropriate (Martín Andrés et al., 1992), and that E* is not a constant but usually increases with the sample size (Martín Andrés & Herranz Tejedor, 1997). In this paper, the authors conduct a theoretical-experimental study from which they ascertain that the value of E* (which is very variable and frequently quite a lot greater than 5) is strongly related to the magnitude of the skewness of the underlying hypergeometric distribution, and that bounding the skewness is equivalent to bounding E (which is the best control procedure). The study enables the expression for the above-mentioned E* (which in turn depends on the number of tails in the test, the alpha error used, the total sample size, and the minimum marginal imbalance) to be estimated. The authors also show that E* generally increases with the sample size and with the marginal imbalance, although it does reach a maximum. Some general and very conservative validity conditions are E ≥ 35.53 (one-tailed test) and E ≥ 7.45 (two-tailed test) for nominal alpha errors in 1% ≤ α ≤ 10%. The traditional condition E ≥ 5 is only valid when the samples are small and one of the marginals is very balanced; alternatively, the condition E ≥ 5.5 is valid for small samples or a very balanced marginal.
Finally, it is proved that the chi-squared test is always valid in tables where both marginals are balanced, and that the maximum skewness permitted is related to the maximum value of the bound E*, to its value for tables with at least one balanced marginal, and to the minimum value that those marginals must have (in non-balanced tables) for the chi-squared test to be valid.
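The validity screen discussed above can be sketched as follows; the toy table and the use of the paper's conservative two-tailed bound E ≥ 7.45 as a cut-off are illustrative assumptions, with the standard routines from scipy.stats doing the actual tests.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[8, 2],
                  [1, 9]])   # hypothetical 2x2 table

# Yates-corrected chi-squared test; scipy also returns the expected
# counts under independence, whose minimum is the quantity E.
chi2, p_yates, dof, expected = chi2_contingency(table, correction=True)
E = expected.min()

# Fisher's exact test (two-sided) for comparison
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")

# Screen the approximation with the conservative two-tailed bound
# E >= 7.45 rather than the traditional E >= 5.
approximation_ok = bool(E >= 7.45)
```

For this table E = 4.5, so the screen fails and the exact p-value would be preferred over the Yates-corrected approximation.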
25.
Classical nondecimated wavelet transforms are attractive for many applications. When the data comes from complex or irregular designs, the use of second generation wavelets in nonparametric regression has proved superior to that of classical wavelets. However, the construction of a nondecimated second generation wavelet transform is not obvious. In this paper we propose a new 'nondecimated' lifting transform, based on the lifting algorithm which removes one coefficient at a time, and explore its behavior. Our approach also allows for embedding adaptivity in the transform, i.e. wavelet functions can be constructed such that their smoothness adjusts to the local properties of the signal. We address the problem of nonparametric regression and propose an (averaged) estimator obtained by using our nondecimated lifting technique teamed with empirical Bayes shrinkage. Simulations show that our proposed method has higher performance than competing techniques able to work on irregular data. Our construction also opens avenues for generating a 'best' representation, which we shall explore.
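The lifting idea underlying the paper can be illustrated in its simplest form; the sketch below implements one split/predict/update level of the classical Haar lifting scheme, not the paper's one-coefficient-at-a-time nondecimated transform, and all names are illustrative.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    detail = odd - even                 # predict each odd sample from its even neighbour
    approx = even + 0.5 * detail        # update so approx carries the pairwise means
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Invert by undoing the lifting steps in reverse order."""
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 3.0])
a, d = haar_lift_forward(signal)
reconstructed = haar_lift_inverse(a, d)
```

Each lifting step is trivially invertible by subtraction, which is what makes lifting attractive as a building block for second generation wavelets on irregular designs.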
26.
In this article, we introduce three new distribution-free Shewhart-type control charts that exploit run and Wilcoxon-type rank-sum statistics to detect possible shifts of a monitored process. Exact formulae for the alarm rate, the run length distribution, and the average run length (ARL) are all derived. A key advantage of these charts is that, due to their nonparametric nature, the false alarm rate (FAR) and the in-control run length distribution are the same for all continuous process distributions. Tables are provided for the implementation of the charts for some typical FAR values. Furthermore, a numerical study reveals that the new charts are quite flexible and efficient in detecting shifts to Lehmann-type out-of-control situations.
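A bare-bones version of a rank-sum based chart can be sketched as follows; the control limits here come from a normal approximation to the Wilcoxon rank-sum statistic with an assumed multiplier L, not from the exact run-length formulae derived in the article, and all names and sample sizes are illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def ranksum_chart(reference, samples, L=3.0):
    """Shewhart-type chart on the Wilcoxon rank-sum of each test sample.

    Signals when the rank-sum of a test sample within the pooled data
    deviates from its null mean by more than L null standard deviations.
    """
    reference = np.asarray(reference, dtype=float)
    m = len(reference)
    signals = []
    for y in samples:
        y = np.asarray(y, dtype=float)
        n = len(y)
        pooled = np.concatenate([reference, y])
        w = rankdata(pooled)[m:].sum()             # rank-sum of the test sample
        mu = n * (m + n + 1) / 2.0                 # null mean
        sigma = np.sqrt(m * n * (m + n + 1) / 12.0)  # null standard deviation
        signals.append(abs(w - mu) > L * sigma)
    return np.array(signals)

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, size=100)         # in-control reference data
in_control = [rng.normal(0.0, 1.0, size=10) for _ in range(5)]
shifted = [rng.normal(3.0, 1.0, size=10)]          # large location shift
signals = ranksum_chart(reference, in_control + shifted)
```

Because only ranks are used, the in-control behaviour of this chart does not depend on the (continuous) process distribution, which is the distribution-free property highlighted in the abstract.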
27.
Diagnostic checks for discrete data regression models using posterior predictive simulations
A. Gelman, Y. Goegebeur, F. Tuerlinckx & I. Van Mechelen 《Journal of the Royal Statistical Society. Series C, Applied statistics》2000,49(2):247-268
Model checking with discrete data regressions can be difficult because the usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fitted to a historical data set on behavioural learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: structured displays of the entire data set, general discrepancy variables based on plots of binned or smoothed residuals versus predictors and specific discrepancy variables created on the basis of the particular concerns arising in an application. Plots of binned residuals are especially easy to use because their predictive distributions under the model are sufficiently simple that model checks can often be made implicitly. The following discrepancy variables did not work well: scatterplots of latent residuals defined from an underlying continuous model and quantile–quantile plots of these residuals.
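The binned-residual discrepancy variable mentioned above can be sketched as follows, assuming purely for illustration that fitted probabilities for a binary regression are available as an array; the binning rule, band formula, and all identifiers are illustrative, not the authors' code.

```python
import numpy as np

def binned_residuals(y, p_hat, n_bins=10):
    """Average raw residuals (y - p_hat) within bins of fitted probability.

    Returns bin-mean fitted probabilities, bin-averaged residuals, and
    approximate +/- 2 standard-error bands; under a well-specified model
    roughly 95% of binned residuals should fall inside the bands.
    """
    order = np.argsort(p_hat)
    y, p_hat = np.asarray(y)[order], np.asarray(p_hat)[order]
    bins = np.array_split(np.arange(len(y)), n_bins)
    centers, means, bands = [], [], []
    for idx in bins:
        p = p_hat[idx]
        centers.append(p.mean())
        means.append((y[idx] - p).mean())
        bands.append(2 * np.sqrt((p * (1 - p)).mean() / len(idx)))
    return np.array(centers), np.array(means), np.array(bands)

rng = np.random.default_rng(3)
p_true = rng.uniform(0.05, 0.95, size=2000)
y = rng.binomial(1, p_true)          # outcomes generated from the model itself
centers, resid, band = binned_residuals(y, p_true)
coverage = np.mean(np.abs(resid) <= band)   # high when the model is well specified
```

Systematic excursions of the binned residuals outside the bands, or a trend against the fitted probabilities, would indicate the kind of model failure the abstract describes.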
28.
Binary probability maps using a hidden conditional autoregressive Gaussian process with an application to Finnish common toad data
I. S. Weir & A. N. Pettitt 《Journal of the Royal Statistical Society. Series C, Applied statistics》2000,49(4):473-484
The Finnish common toad data of Heikkinen and Hogmander are reanalysed using an alternative fully Bayesian model that does not require a pseudolikelihood approximation and an alternative prior distribution for the true presence or absence status of toads in each 10 km×10 km square. Markov chain Monte Carlo methods are used to obtain posterior probability estimates of the square-specific presences of the common toad and these are presented as a map. The results are different from those of Heikkinen and Hogmander and we offer an explanation in terms of the prior used for square-specific presence of the toads. We suggest that our approach is more faithful to the data and avoids unnecessary confounding of effects. We demonstrate how to extend our model efficiently with square-specific covariates and illustrate this by introducing deterministic spatial changes.
29.
M. Jamshidian & R. I. Jennrich 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(2):257-270
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and of algorithms that differentiate the EM operator are identified.
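The first approach, numerically differentiating to obtain the Hessian of the log-likelihood and inverting the observed information for standard errors, can be sketched on a toy model with a closed-form check; the exponential example, step size, and all names below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def loglik(lam, x):
    """Exponential(rate=lam) log-likelihood."""
    return len(x) * np.log(lam) - lam * x.sum()

def numeric_hessian(f, theta, h=1e-5):
    """Second derivative of a scalar function by central differences."""
    return (f(theta + h) - 2 * f(theta) + f(theta - h)) / h**2

rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=500)       # true rate = 2
lam_hat = 1.0 / x.mean()                       # closed-form MLE

# Observed information = minus the Hessian of the log-likelihood at the
# MLE; its inverse is the asymptotic variance, hence the standard error.
obs_info = -numeric_hessian(lambda lam: loglik(lam, x), lam_hat)
se_numeric = np.sqrt(1.0 / obs_info)
se_exact = lam_hat / np.sqrt(len(x))           # analytic check: lam / sqrt(n)
```

In EM applications the log-likelihood (or the score) would come from the model being fitted rather than a closed form, but the differencing and inversion steps are the same.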
30.
Philip B. Whyman, Mark J. Baimbridge, Babatunde A. Buraimo & Alina I. Petrescu 《British Journal of Management》2015,26(3):347-364
This paper investigates the relationship between workplace flexibility practices (WFPs) and corporate performance using data from the British Workplace Employment Relations Survey 2004. Disaggregating WFPs into numerical, functional and cost aspects enables the analysis of their relationships to an objective measure of corporate performance, namely workplace financial turnover. Furthermore, separate analyses are presented for different types of workplace, differentiated by workforce size, ownership, age, wage level and unionization. Results show that different types of workplaces need to pay attention to the mix of WFPs they adopt. We find that certain cost WFPs (profit-related pay, merit pay and payment-by-results) have strong positive relationships with corporate performance. However, training delivers mixed corporate performance results, while the extent of job autonomy and the proportion of part-time employees in a workplace have an inverse association with corporate performance. Given the limited existing research examining disaggregated measures of WFPs and objectively measured corporate performance, this paper offers useful insights for firms, policy makers and the overall economy.