181.
Consider the linear regression model Y = Xθ + ε, where Y denotes a vector of n observations on the dependent variable, X is a known matrix, θ is a vector of parameters to be estimated, and ε is a random vector of uncorrelated errors. If X'X is nearly singular, that is, if the smallest characteristic root of X'X is small, then a small perturbation in the elements of X, such as one due to measurement errors, induces considerable variation in the least squares estimate of θ. In this paper we examine, for the asymptotic case when n is large, the effect of such perturbation on the bias and mean squared error of the estimate.
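As a quick numerical illustration of the sensitivity described above (a sketch, not the paper's asymptotic analysis), the snippet below applies the same tiny perturbation of X to a nearly singular and a well-conditioned design and compares how far the least squares estimate moves:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Nearly singular design: the second column almost duplicates the first,
# so the smallest characteristic root of X'X is close to zero.
x1 = rng.normal(size=n)
X_ill = np.column_stack([x1, x1 + 1e-4 * rng.normal(size=n)])
X_good = np.column_stack([x1, rng.normal(size=n)])  # well-conditioned benchmark
theta = np.array([1.0, 2.0])
y = X_ill @ theta + 0.1 * rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# The same tiny perturbation of X (think measurement error) ...
E = 1e-6 * rng.normal(size=(n, 2))
shift_ill = np.linalg.norm(ols(X_ill + E, y) - ols(X_ill, y))
shift_good = np.linalg.norm(ols(X_good + E, y) - ols(X_good, y))
# ... moves the estimate far more when X'X is nearly singular.
```

The design matrices, noise scales, and seed are illustrative assumptions; the point is only the contrast between the two shifts.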
182.
Nazir Ahmed Chaudhry, Communications in Statistics - Theory and Methods, 2013, 42(9): 3283-3313
Using the idea of the empirical influence function (Hinkley, 1977), the weighted jackknife technique is extended to ratio estimation. A weighted jackknife variance estimator for the ratio estimator is developed. Using the prediction theory approach, the properties of the weighted jackknife variance estimator are examined. The implications of failures of the regression model for the behaviour of the weighted jackknife variance estimator in ratio estimation are also studied.
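A minimal sketch of the underlying idea, using the classical (unweighted) jackknife for a ratio estimator; the weighted variant of the paper would replace the equal weights below with weights derived from the empirical influence function. The data-generating model here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(1, 5, size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)

r_hat = y.sum() / x.sum()  # ratio estimator R-hat = sum(y) / sum(x)

# Leave-one-out replicates R-hat_(i)
r_loo = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])

# Jackknife pseudo-values and the resulting variance estimate
pseudo = n * r_hat - (n - 1) * r_loo
v_jack = pseudo.var(ddof=1) / n
```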
183.
Wei-Min Shen, Risk Analysis, 2011, 31(5): 745-757
Human error is one of the significant factors contributing to accidents. Traditional human error probability (HEP) studies based on fuzzy-number concepts are one contribution to addressing this problem, and are particularly useful where data are scarce. However, the discriminability of such studies may be questioned when experts have adequate information and specific values can be determined on the abscissa of the membership functions of the linguistic terms, that is, when the fuzzy data of the scenarios considered are close to each other. In this article, a novel HEP assessment aimed at solving this difficulty is proposed. Under the framework, the fuzzy data are equipped with linguistic terms and membership values. By establishing a rule base for data combination, followed by defuzzification and HEP transformation, the HEP results can be acquired. The methodology is first examined on a test case consisting of three scenarios whose fuzzy data are close to each other, and the results are compared with the outcomes produced by traditional fuzzy HEP studies on the same test case. It is concluded that the proposed methodology has a higher degree of discriminability and is capable of providing more reasonable results. Furthermore, where data are scarce, the proposed approach can also provide a range of HEP results based on different, arbitrarily established risk viewpoints, as illustrated with a real-world example.
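The paper's specific rule base and HEP transformation are not reproduced here, but the generic fuzzy-logic steps it builds on (triangular linguistic terms, combination, centroid defuzzification) can be sketched as follows. The terms, supports, and membership weights below are illustrative assumptions, deliberately chosen so the two fuzzy data sets lie close together:

```python
import numpy as np

def tri_membership(xs, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.clip(np.minimum((xs - a) / (b - a), (c - xs) / (c - b)), 0, 1)

# Two adjacent linguistic terms whose fuzzy data lie close together --
# exactly the situation in which discriminability becomes an issue.
xs = np.linspace(0, 1, 1001)
mu_low = tri_membership(xs, 0.2, 0.4, 0.6)
mu_mod = tri_membership(xs, 0.3, 0.5, 0.7)

# Max-combination of the weighted terms, then centroid defuzzification.
mu = np.maximum(0.8 * mu_low, 0.6 * mu_mod)   # example membership values
centroid = np.sum(xs * mu) / np.sum(mu)
```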
184.
Michael Lavine, Communications in Statistics - Simulation and Computation, 2013, 42(1): 269-283
Cook (1986) presented the idea of local influence to study the sensitivity of inferences to model assumptions: introduce a vector δ of perturbations to the model; choose a discrepancy function D to measure differences between the original inference and the inference under the perturbed model; and study the behavior of D near δ = 0, the original model, usually by taking derivatives. Johnson and Geisser (1983) measure influence in Bayesian inference by the Kullback-Leibler divergence between predictive distributions. McCulloch (1989) synthesizes Cook with Johnson and Geisser, using the Kullback-Leibler divergence between posterior or predictive distributions as the discrepancy function in Bayesian local influence analyses. We analyze a special case for which McCulloch gives the general theory, namely the linear model with a conjugate prior. We present specific formulae for local influence measures for (1) changes in the parameters of the gamma prior for the precision, (2) changes in the mean of the normal prior for the regression coefficients, (3) changes in the covariance matrix of the normal prior for the regression coefficients, and (4) changes in the case weights. Our method is an easy way to find locally influential subsets of points without knowing the sizes of the subsets in advance. The techniques are illustrated with a regression example.
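To illustrate the kind of discrepancy function involved (a toy univariate sketch, not the paper's conjugate linear-model formulae), the snippet below perturbs the prior mean in a conjugate normal model with known data variance and tracks the Kullback-Leibler divergence between the original and perturbed posteriors. Near δ = 0 the divergence grows quadratically, which is why local influence is studied through derivatives at the original model:

```python
import numpy as np

def kl_normal(m0, v0, m1, v1):
    """KL divergence KL(N(m0, v0) || N(m1, v1)) between univariate normals."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1)

# Conjugate normal model with known data variance sigma2 (illustrative data).
y = np.array([1.2, 0.8, 1.5, 1.1])
sigma2, m_prior, v_prior = 1.0, 0.0, 4.0

def posterior(m_pr):
    """Posterior mean and variance under the prior N(m_pr, v_prior)."""
    v_post = 1.0 / (1.0 / v_prior + len(y) / sigma2)
    m_post = v_post * (m_pr / v_prior + y.sum() / sigma2)
    return m_post, v_post

m0, v0 = posterior(m_prior)
deltas = np.array([0.01, 0.02])
kls = np.array([kl_normal(m0, v0, *posterior(m_prior + d)) for d in deltas])
# Doubling the perturbation quadruples the divergence near delta = 0,
# reflecting its locally quadratic behaviour.
```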
185.
Alberto Luceño, Communications in Statistics - Simulation and Computation, 2013, 42(1): 235-245
A well-known process capability index is slightly modified in this article to provide a new measure of process capability that takes account of both the process location and its variability, and for which a point estimator and confidence intervals exist that are insensitive to departures from the assumption of normal variability. Two examples of applications based on real data are presented.
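The paper's modified index is not reproduced here; as a baseline, the classical Cpk that such modifications start from can be computed as follows. The specification limits and process parameters below are made-up illustrative values:

```python
import numpy as np

def cpk(x, lsl, usl):
    """Classical Cpk: distance from the process mean to the nearer
    specification limit, in units of three standard deviations, so both
    location and variability enter the index."""
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

rng = np.random.default_rng(2)
sample = rng.normal(loc=10.2, scale=0.5, size=500)  # slightly off-centre process
value = cpk(sample, lsl=8.0, usl=12.0)  # expected near (12 - 10.2) / 1.5 = 1.2
```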
186.
This paper considers a likelihood ratio test for testing hypotheses defined by non-oblique closed convex cones, satisfying the so-called iteration projection property, in a set of k normal means. We obtain the critical values of the test using the chi-bar-squared distribution. Obtuse cones are introduced as a particular class of cones that are non-oblique with every one of their faces. Examples with the simple tree order cone and the total order cone illustrate the results.
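The chi-bar-squared distribution can be approximated by simulation: project standard normal vectors onto the cone and take the squared norm of the projection. The sketch below uses the nonnegative orthant in R^3 as a simple closed convex cone (an illustrative choice, not one of the paper's examples), for which the projection is just coordinatewise truncation:

```python
import numpy as np

rng = np.random.default_rng(5)
k = 3
z = rng.normal(size=(100_000, k))

# Projection onto the nonnegative orthant (a closed convex cone in R^k);
# the squared norm of the projection follows a chi-bar-squared law under H0.
proj = np.clip(z, 0.0, None)
stat = (proj ** 2).sum(axis=1)

crit_95 = np.quantile(stat, 0.95)  # simulated 5% critical value
p_zero = (stat == 0.0).mean()      # mixing weight on chi-squared(0) is 2**-k
```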
187.
Ghazi Shukur, Communications in Statistics - Simulation and Computation, 2013, 42(2): 419-448
Using Monte Carlo methods, the properties of systemwise generalisations of the Breusch-Godfrey test for autocorrelated errors are studied in situations where the error terms follow either normal or non-normal distributions, and where these errors follow either AR(1) or MA(1) processes. Edgerton and Shukur (1999) studied the properties of the test with normally distributed error terms following an AR(1) process. When the errors follow a non-normal distribution, the performance of the tests deteriorates, especially when the tails are very heavy; it improves, approaching the normally distributed case, when the errors are less heavy-tailed.
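A single-equation version of the Breusch-Godfrey test (the systemwise generalisations studied in the paper extend this to several equations) can be sketched as below; the simulated AR(1) design is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 300, 0.5

# Regression errors generated by an AR(1) process: u_t = rho*u_{t-1} + v_t
v = rng.normal(size=n)
u = np.empty(n)
u[0] = v[0]
for t in range(1, n):
    u[t] = rho * u[t - 1] + v[t]

x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 0.5]) + u

def breusch_godfrey(X, y, p=1):
    """LM statistic: regress the OLS residuals on the original regressors
    plus p lags of the residuals; n * R^2 is asymptotically chi-squared(p)
    under the null of no serial correlation."""
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    lags = np.column_stack([np.r_[np.zeros(k), e[:-k]] for k in range(1, p + 1)])
    Z = np.column_stack([X, lags])
    resid = e - Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
    return len(y) * (1.0 - resid @ resid / (e @ e))

lm = breusch_godfrey(X, y, p=1)  # compare with 3.84, the 5% chi-squared(1) value
```

With rho = 0.5 and n = 300 the statistic lands far above the 5% critical value, so the test rejects the null of no autocorrelation.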
188.
189.
Ole Klungsøyr, Joe Sexton, Inger Sandanger, Jan F. Nygård, Journal of Applied Statistics, 2013, 40(4): 843-861
A substantial degree of uncertainty surrounds the reconstruction of events based on memory recall. This form of measurement error affects the performance of structured interviews such as the Composite International Diagnostic Interview (CIDI), an important tool for assessing mental health in the community. Measurement error probably explains the discrepancy in estimates between longitudinal studies with repeated assessments (the gold standard), which yield approximately constant rates of depression, and cross-sectional studies, which often find increasing rates closer in time to the interview. Repeated assessments of current status (or recent history) are more reliable than reconstruction of a person's psychiatric history from a single interview. In this paper, we demonstrate a method of estimating a time-varying measurement error distribution in the age of onset of an initial depressive episode, as diagnosed by the CIDI, based on an assumption regarding age-specific incidence rates. High-dimensional non-parametric estimation is achieved by the EM algorithm with smoothing. The method is applied to data from a Norwegian mental health survey in 2000. The measurement error distribution changes dramatically from 1980 to 2000, with increasing variance and greater bias further away in time from the interview. Some influence of the measurement error on already published results is found.
190.
Journal of Statistical Computation and Simulation, 2012, 82(12): 1939-1969
Uncertainty and sensitivity analysis is an essential ingredient of model development and applications. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated from a relatively large sample to measure the importance of parameters in their contributions to uncertainties in model outputs. To compare their importance statistically, uncertainty and sensitivity analysis techniques must provide standard errors of the estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated by the delta method were compared with those estimated from 20 sample replicates. We found that the delta method provides a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also propose a method to determine the minimum sample size needed to achieve a desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make FAST analysis computationally much more efficient for complex models.
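FAST itself is not reproduced here; as a sketch of the replicate-based standard errors the paper compares against, the snippet below estimates a first-order sensitivity index with a pick-freeze Sobol estimator (a different, simpler estimator than FAST) and computes its standard error over 20 replicates. The toy model and sample sizes are illustrative assumptions:

```python
import numpy as np

def f(X):
    # Toy additive model on uniform inputs: the first input carries variance
    # 1/12 out of a total of 1.25/12, so its true first-order index is 0.8.
    return X[:, 0] + 0.5 * X[:, 1]

def s1_pick_freeze(f, j, n, rng, d=2):
    """First-order index S_j via pick-freeze: two independent samples that
    share only column j; their covariance estimates Var(E[Y | X_j])."""
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    AB = B.copy()
    AB[:, j] = A[:, j]
    yA, yAB = f(A), f(AB)
    return np.cov(yA, yAB)[0, 1] / yA.var(ddof=1)

rng = np.random.default_rng(4)
reps = np.array([s1_pick_freeze(f, 0, 2000, rng) for _ in range(20)])
s1_hat = reps.mean()
se_hat = reps.std(ddof=1) / np.sqrt(len(reps))  # replicate-based standard error
```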