1,613 results found (search time: 15 ms)
901.
S. N. Wood 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(2):413-428
Penalized likelihood methods provide a range of practical modelling tools, including spline smoothing, generalized additive models and variants of ridge regression. Selecting the correct weights for penalties is a critical part of using these methods and in the single-penalty case the analyst has several well-founded techniques to choose from. However, many modelling problems suggest a formulation employing multiple penalties, and here general methodology is lacking. A wide family of models with multiple penalties can be fitted to data by iterative solution of the generalized ridge regression problem: minimize ‖W^(1/2)(Xp − y)‖² ρ + Σ_{i=1}^{m} θ_i p′S_i p, where p is a parameter vector, X a design matrix, S_i a non-negative definite coefficient matrix defining the ith penalty with associated smoothing parameter θ_i, W a diagonal weight matrix, y a vector of data or pseudodata and ρ an 'overall' smoothing parameter included for computational efficiency. This paper shows how smoothing parameter selection can be performed efficiently by applying generalized cross-validation to this problem and how this allows non-linear, generalized linear and linear models to be fitted using multiple penalties, substantially increasing the scope of penalized modelling methods. Examples of non-linear modelling, generalized additive modelling and anisotropic smoothing are given.
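The inner generalized ridge problem described above is easy to sketch. The following minimal numpy illustration (not Wood's actual, numerically stable algorithm; W and the overall parameter ρ are fixed at the identity and 1 here) fits p for a given set of smoothing parameters θ_i and evaluates the GCV score that drives their selection:

```python
import numpy as np

def fit_generalized_ridge(X, y, S_list, thetas):
    """Solve min_p ||Xp - y||^2 + sum_i theta_i * p' S_i p
    (W and the overall parameter rho are fixed at identity / 1 here)."""
    A = X.T @ X + sum(t * S for t, S in zip(thetas, S_list))
    p = np.linalg.solve(A, X.T @ y)
    H = X @ np.linalg.solve(A, X.T)    # influence (hat) matrix
    return p, H

def gcv_score(X, y, S_list, thetas):
    """Generalized cross-validation score: n * RSS / (n - tr(H))^2."""
    n = X.shape[0]
    p, H = fit_generalized_ridge(X, y, S_list, thetas)
    rss = np.sum((y - X @ p) ** 2)
    return n * rss / (n - np.trace(H)) ** 2
```

Minimizing `gcv_score` over the θ_i is the smoothing-parameter selection step; Wood's contribution is doing that minimization efficiently and stably for several penalties at once.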
902.
Yongmiao Hong 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(3):557-574
Two tests for serial dependence are proposed using a generalized spectral theory in combination with the empirical distribution function. The tests are generalizations of the Cramér-von Mises and Kolmogorov-Smirnov tests based on the standardized spectral distribution function. They do not involve the choice of a lag order, and they are consistent against all types of pairwise serial dependence, including those with zero autocorrelation. They also require no moment condition and are distribution free under serial independence. A simulation study compares the finite sample performances of the new tests and some closely related tests. The asymptotic distribution theory works well in finite samples. The generalized Cramér-von Mises test has good power against a variety of dependent alternatives and dominates the generalized Kolmogorov-Smirnov test. A local power analysis explains some important stylized facts on the power of the tests based on the empirical distribution function.
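Hong's generalized spectral tests aggregate over all lags, but their basic ingredient, a Cramér-von Mises-type distance between a joint empirical distribution and the product of its marginals, can be sketched at a single fixed lag. This toy statistic (an assumption-laden stand-in, not the paper's lag-free test, and without its null calibration) is zero-autocorrelation-blind in the same useful sense: it reacts to any pairwise dependence, not just linear correlation:

```python
import numpy as np

def cvm_lag_dependence(x, lag=1):
    """Cramer-von Mises-type distance, at a single lag, between the joint
    empirical CDF of (x_t, x_{t-lag}) and the product of the marginal
    empirical CDFs, evaluated at the observed pairs themselves."""
    u, v = x[lag:], x[:-lag]
    n = len(u)
    le_u = u[None, :] <= u[:, None]        # le_u[i, j] = 1{u_j <= u_i}
    le_v = v[None, :] <= v[:, None]
    joint = np.mean(le_u & le_v, axis=1)   # joint EDF at each observed pair
    prod = le_u.mean(axis=1) * le_v.mean(axis=1)
    return n * np.mean((joint - prod) ** 2)
```

For i.i.d. data the statistic stays small; for a strongly dependent series (e.g. a random walk) it is much larger. A practical test would calibrate its null distribution by permutation or by the paper's asymptotic theory.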
903.
Kaatje Bollaerts, Marc Aerts, Stefaan Ribbens, Yves Van der Stede, Ides Boone, Koen Mintiens 《Journal of the Royal Statistical Society. Series A (Statistics in Society)》2008,171(2):449-464
Summary. Consumption of pork that is contaminated with Salmonella is an important source of human salmonellosis worldwide. To control and prevent salmonellosis, Belgian pig-herds with a high Salmonella infection burden are encouraged to take part in a control programme supporting the implementation of control measures. The Belgian government decided that only the 10% of pig-herds with the highest Salmonella infection burden (denoted high risk herds) can participate. To identify these herds, serological data reported as sample-to-positive ratios (SP-ratios) are collected. However, SP-ratios have an extremely skewed distribution and are heavily subject to confounding seasonal and animal age effects. Therefore, we propose to identify the 10% high risk herds by using semiparametric quantile regression with P-splines. In particular, quantile curves of animal SP-ratios are estimated as a function of sampling time and animal age. Then, pigs are classified into low and high risk animals, with high risk animals having an SP-ratio larger than the corresponding estimated upper quantile. Finally, for each herd, the number of high risk animals is calculated, as well as the beta-binomial p-value reflecting the hypothesis that the Salmonella infection burden is higher in that herd than in the other herds. The 10% of pig-herds with the lowest p-values are then identified as high risk herds. In addition, since high risk herds are given support to implement control measures, a risk factor analysis is conducted using binomial generalized linear mixed models to investigate factors associated with decreased or increased Salmonella infection burden. Finally, since the choice of a specific upper quantile is to a certain extent arbitrary, a sensitivity analysis is conducted comparing different choices of upper quantiles.
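The classification step can be sketched with a crude stand-in for the article's P-spline quantile curves: empirical upper quantiles of SP-ratio within age bins. Everything here (the `classify_high_risk` helper, the triple format, the binning scheme) is hypothetical illustration, not the authors' method, and the beta-binomial p-value step is omitted:

```python
import math
from collections import defaultdict

def classify_high_risk(records, q=0.9, n_bins=4):
    """records: iterable of (herd, age, sp_ratio) triples.
    Flags an animal as high risk when its SP-ratio exceeds the empirical
    q-quantile of its age bin, then tallies [high risk, total] per herd."""
    recs = list(records)
    ages = sorted(age for _, age, _ in recs)
    # age-bin edges at equally spaced empirical quantiles of age
    edges = [ages[len(ages) * k // n_bins] for k in range(1, n_bins)]
    def bin_of(age):
        return sum(age > e for e in edges)
    bins = defaultdict(list)
    for _, age, sp in recs:
        bins[bin_of(age)].append(sp)
    # empirical upper q-quantile of the SP-ratios within each age bin
    cutoffs = {b: sorted(v)[math.ceil(q * len(v)) - 1] for b, v in bins.items()}
    counts = defaultdict(lambda: [0, 0])
    for herd, age, sp in recs:
        counts[herd][0] += sp > cutoffs[bin_of(age)]
        counts[herd][1] += 1
    return dict(counts)
```

The per-herd counts of high risk animals would then feed the beta-binomial ranking that selects the 10% high risk herds.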
904.
Maximum likelihood estimates (MLEs) for logistic regression coefficients are known to be biased in finite samples and consequently may produce misleading inferences. Bias-adjusted estimates can be calculated using the first-order asymptotic bias derived from a Taylor series expansion of the log-likelihood. Jackknifing can also be used to obtain bias-corrected estimates, but the approach is computationally intensive, requiring an additional series of iterations (steps) for each observation in the dataset. Although the one-step jackknife has been shown to be useful in logistic regression diagnostics and in the estimation of classification error rates, it does not effectively reduce bias. The two-step jackknife, however, can reduce computation in moderate-sized samples, provide estimates of dispersion and classification error, and appears to be effective in bias reduction. Another alternative, a two-step closed-form approximation, is found to be similar to the Taylor series method in certain circumstances. Monte Carlo simulations indicate that all the procedures, but particularly the multi-step jackknife, may tend to over-correct in very small samples. Comparison of the various bias correction procedures in an example from the medical literature illustrates that bias correction can have a considerable impact on inference.
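The jackknife bias correction at the heart of this comparison is generic, and a minimal sketch makes the cost visible: one refit per observation. Rather than refitting a logistic regression (as the article does), the sketch below applies the same formula to the plug-in variance, where the correction is known to be exact:

```python
import numpy as np

def jackknife_bias_correct(estimator, x):
    """Quenouille's jackknife bias correction:
    n * theta_hat - (n - 1) * mean of the leave-one-out estimates.
    Requires one call to `estimator` per observation."""
    n = len(x)
    theta_hat = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return n * theta_hat - (n - 1) * loo.mean()

# For the plug-in variance mean((x - xbar)^2), whose bias is exactly
# -sigma^2 / n, the jackknife recovers the unbiased estimator.
rng = np.random.default_rng(0)
x = rng.normal(size=30)
plug_in = lambda a: np.mean((a - a.mean()) ** 2)
corrected = jackknife_bias_correct(plug_in, x)
```

For logistic regression, `estimator` would be a full MLE fit, which is exactly why the article studies cheaper one-step and two-step variants.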
905.
We consider some methods of semiparametric regression estimation in multivariate models when the common distribution function is represented using a copula and the marginals satisfy a generalized regression model using a transfer functional. Sufficient conditions for consistency and joint asymptotic normality of the finite-dimensional parameters are obtained.
906.
Huixiu Zhao 《Communications in Statistics - Theory and Methods》2013,42(5):594-606
For exchangeable binary data with random cluster sizes, we use a pairwise likelihood procedure to give a set of approximately optimal unbiased estimating equations for estimating the mean and variance parameters. Theoretical results are obtained establishing the large sample properties of the solutions to the estimating equations. An application to a developmental toxicity study is given. Simulation results show that the pairwise likelihood procedure is valid and performs better than the GEE procedure for exchangeable binary data.
907.
In this article, we construct an improved procedure for estimating the process capability index C_pmk. We propose a new C_pmk lower-bound approach based on the generalized confidence interval (GCI) concept and compare it with other existing methods. Based on the comparison results, we conclude with a recommendation and construct a step-by-step procedure for the recommended approach to estimate the actual process capability C_pmk for various sample sizes. The lower bound attained by our recommended approach indeed improves on other existing lower-bound methods. We also present a real-world application to illustrate how the recommended approach can be applied to actual manufacturing processes.
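For context, the point estimate of C_pmk being bounded is the standard third-generation capability index; a minimal sketch (the point estimator only, not the article's GCI lower bound):

```python
import math

def c_pmk(mean, sd, lsl, usl, target):
    """Point estimate of the process capability index
    C_pmk = min(USL - mu, mu - LSL) / (3 * sqrt(sigma^2 + (mu - T)^2)).
    It penalizes both spread (sd) and departure of the mean from target."""
    tau = math.sqrt(sd ** 2 + (mean - target) ** 2)
    return min(usl - mean, mean - lsl) / (3.0 * tau)
```

An on-target, on-center process with sd = 1 and specification limits 4 and 16 gives C_pmk = 2; shifting the mean off target lowers the index, which is the behavior the lower-bound procedure must account for.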
908.
In a nonlinear regression model based on a regularization method, selection of appropriate regularization parameters is crucial. Information criteria such as the generalized information criterion (GIC) and the generalized Bayesian information criterion (GBIC) are useful for selecting the optimal regularization parameters. However, the optimal parameter is often determined by calculating the information criterion for all candidate regularization parameters, so the computational cost is high. One simple way to avoid this is to regard GIC or GBIC as a function of the regularization parameters and to find a value minimizing it, but it is unclear how to solve the resulting optimization problem. In the present article, we propose an efficient Newton-Raphson-type iterative method for selecting optimal regularization parameters with respect to GIC or GBIC in a nonlinear regression model based on basis expansions. This method reduces the computational time remarkably compared with a grid search and can select more suitable regularization parameters. The effectiveness of the method is illustrated through real data examples.
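The idea of replacing a grid search with Newton-Raphson on the criterion itself can be sketched on a simpler stand-in: ridge regression with GCV as the criterion (an assumption on my part; the article works with GIC/GBIC in a basis-expansion model, and derives the derivatives analytically rather than by finite differences):

```python
import numpy as np

def gcv_ridge(log_lam, X, y):
    """GCV score of ridge regression, viewed as a function of log(lambda)."""
    n, p = X.shape
    A = X.T @ X + np.exp(log_lam) * np.eye(p)
    H = X @ np.linalg.solve(A, X.T)      # influence (hat) matrix
    rss = np.sum((y - H @ y) ** 2)
    return n * rss / (n - np.trace(H)) ** 2

def newton_min(f, x0, h=1e-4, tol=1e-8, max_iter=50):
    """Newton-Raphson on f'(x) = 0, with central finite differences."""
    x = x0
    for _ in range(max_iter):
        g = (f(x + h) - f(x - h)) / (2 * h)                # ~ f'(x)
        hess = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2   # ~ f''(x)
        if hess <= 0:
            step = -0.5 * np.sign(g)   # damped fallback outside convex region
        else:
            step = -g / hess
        if abs(step) < tol:
            break
        x += step
    return x
```

Each Newton iteration costs about as much as three grid-point evaluations but converges in a handful of steps, which is the source of the speed-up the article reports over exhaustive evaluation.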
909.
This article presents a new procedure for testing homogeneity of scale parameters from k independent inverse Gaussian populations. Based on the idea of the generalized likelihood ratio method, a new generalized p-value is derived. Simulation results are presented to compare the performance of the proposed method with existing methods. Numerical results show that the proposed test has good size and power performance.
910.
In this article, we consider the exact computation of the well-known halfspace depth (HD) and regression depth (RD) from the viewpoint of cutting a convex cone with hyperplanes. Two new algorithms are proposed for computing these two notions of depth. The first is relatively straightforward but quite inefficient, whereas the second is much faster. It is noteworthy that both can be implemented in spaces of dimension higher than three. Numerical examples are provided to illustrate their performance.
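To make the quantity being computed concrete: in two dimensions the halfspace depth admits a simple exact algorithm by sweeping the directions determined by the data. This sketch is the classical direction sweep, not the article's cone-cutting algorithms (which are what extend beyond three dimensions):

```python
import numpy as np

def halfspace_depth_2d(z, points):
    """Exact halfspace (Tukey) depth of z in 2-D: the smallest number of data
    points contained in a closed halfplane whose boundary passes through z.
    The minimum is attained with the boundary through z and a data point,
    so it suffices to sweep those candidate directions."""
    d = np.asarray(points, dtype=float) - np.asarray(z, dtype=float)
    at_z = np.all(d == 0, axis=1)
    n_z = int(at_z.sum())          # coincident points lie in every halfplane
    d = d[~at_z]
    if len(d) == 0:
        return n_z
    best = len(d)
    for a in np.unique(np.arctan2(d[:, 1], d[:, 0])):
        u = np.array([np.cos(a), np.sin(a)])   # boundary line direction
        proj = d @ np.array([-u[1], u[0]])     # signed distance to the line
        L = int(np.sum(proj > 1e-12))
        R = int(np.sum(proj < -1e-12))
        on = np.abs(proj) <= 1e-12             # collinear with z and the pivot
        s = int(np.sum(on & (d @ u > 0)))      # collinear, on the pivot's side
        o = int(on.sum()) - s                  # collinear, opposite side
        # tilting slightly off the line sends s and o to opposite open sides
        best = min(best, min(L, R) + min(s, o))
    return n_z + best
```

For the four corners of a square, the centre has depth 2, a corner has depth 1, and an outside point has depth 0, matching the intuition that depth measures centrality.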