61.
In this paper, within the framework of a Bayesian model, we consider the problem of sequentially estimating the intensity parameter of a homogeneous Poisson process under a linear exponential (LINEX) loss function and a fixed cost per unit time. An asymptotically pointwise optimal (APO) rule is proposed. It is shown to be asymptotically optimal for arbitrary priors and asymptotically non-deficient for conjugate priors, in the sense of Bickel and Yahav [Asymptotically pointwise optimal procedures in sequential analysis, in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California Press, Berkeley, CA, 1967, pp. 401–413; Asymptotically optimal Bayes and minimax procedures in sequential estimation, Ann. Math. Statist. 39 (1968), pp. 442–456] and Woodroofe [A.P.O. rules are asymptotically non-deficient for estimation with squared error loss, Z. Wahrsch. verw. Gebiete 58 (1981), pp. 331–341], respectively. The proposed APO rule is illustrated using a real data set.
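As context for the abstract above, the sketch below (Python, with assumed prior and data values) shows the standard fixed-sample Bayes estimate of a Poisson intensity under LINEX loss with a conjugate gamma prior. It is only a minimal illustration of the loss function and the resulting estimate, not the sequential APO rule proposed in the paper.

```python
import numpy as np

def linex_loss(estimate, theta, a=1.0, b=1.0):
    """LINEX loss: b * (exp(a*(estimate - theta)) - a*(estimate - theta) - 1)."""
    d = estimate - theta
    return b * (np.exp(a * d) - a * d - 1.0)

def bayes_estimate_linex_poisson(events, exposure, alpha0=1.0, beta0=1.0, a=1.0):
    """Bayes estimate of a Poisson intensity under LINEX loss with a conjugate
    Gamma(alpha0, beta0) prior (rate parameterization).

    The posterior is Gamma(alpha0 + events, beta0 + exposure), and the
    LINEX-optimal estimate is -(1/a) * log E[exp(-a * theta) | data],
    which for a gamma posterior equals (alpha / a) * log(1 + a / beta).
    """
    alpha = alpha0 + events
    beta = beta0 + exposure
    return (alpha / a) * np.log(1.0 + a / beta)

# Assumed example: 42 events observed over 10 time units, prior Gamma(1, 1).
est = bayes_estimate_linex_poisson(42, 10.0, a=1.0)
post_mean = (1.0 + 42) / (1.0 + 10.0)   # squared-error Bayes estimate, for comparison
print(est, post_mean)                   # the LINEX estimate sits below the posterior mean
```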
62.
Jin Zhang, Statistics, 2013, 47(4): 792–799
The Pareto distribution is an important distribution in statistics that has been widely used in finance, physics, hydrology, geology, and astronomy. Although parameter estimation for the Pareto distribution is well established in the literature, the estimation problem becomes more complex for the truncated Pareto distribution. This article investigates the bias and mean-squared error of maximum-likelihood estimation for the truncated Pareto distribution, and some useful results are obtained.
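Since the abstract studies the bias and mean-squared error of the truncated-Pareto MLE, a small Monte Carlo sketch of the same quantities may help fix ideas. The sketch below assumes known truncation bounds and illustrative parameter values; it is not the derivation in the article.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def r_trunc_pareto(n, alpha, L, U):
    """Sample from a Pareto(alpha, L) truncated to [L, U] by inverse CDF."""
    u = rng.uniform(size=n)
    c = 1.0 - (L / U) ** alpha
    return L * (1.0 - u * c) ** (-1.0 / alpha)

def mle_trunc_pareto(x, L, U):
    """Numerical MLE of the shape parameter, truncation bounds assumed known."""
    n, s = len(x), np.sum(np.log(x))
    def negloglik(a):
        return -(n * np.log(a) + n * a * np.log(L)
                 - (a + 1.0) * s - n * np.log(1.0 - (L / U) ** a))
    return minimize_scalar(negloglik, bounds=(1e-4, 50.0), method="bounded").x

# Monte Carlo bias and MSE of the MLE (illustrative parameter values).
alpha, L, U, n, reps = 1.5, 1.0, 10.0, 50, 2000
est = np.array([mle_trunc_pareto(r_trunc_pareto(n, alpha, L, U), L, U)
                for _ in range(reps)])
print("bias:", est.mean() - alpha, "MSE:", np.mean((est - alpha) ** 2))
```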
63.
The family of skew distributions introduced by Azzalini and extended by others has received widespread attention. However, it suffers from complicated inference procedures. In this paper, a new family of skew distributions that overcomes these difficulties is introduced. The new family belongs to the exponential family. Many properties of this family are studied, inference procedures are developed, and simulation studies are performed to assess the procedures. Some particular cases of the family, evidence of its flexibility, and a real data application are presented. At least 10 advantages of the new family over Azzalini's distributions are established.
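The abstract does not specify the new family, so no attempt is made to reproduce it here; for reference, the sketch below only evaluates the Azzalini skew-normal density 2*phi(x)*Phi(alpha*x) that the new family is compared against.

```python
import numpy as np
from scipy.stats import norm

def azzalini_skew_normal_pdf(x, alpha):
    """Density of Azzalini's skew-normal: f(x) = 2 * phi(x) * Phi(alpha * x)."""
    return 2.0 * norm.pdf(x) * norm.cdf(alpha * x)

# Evaluate the density on a small grid for an assumed skewness parameter.
x = np.linspace(-4.0, 4.0, 9)
print(azzalini_skew_normal_pdf(x, alpha=3.0))
```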
64.
This paper concludes our comprehensive study of point estimation of the model parameters of a gamma distribution from a second-order decision-theoretic point of view. Efficient estimation of gamma model parameters for samples that are 'not large' is a challenging task, since the exact sampling distributions of the maximum likelihood estimators and their variants are not known. Estimation of the gamma scale parameter has received less attention from earlier researchers than shape parameter estimation. What we observe here is that improved estimation of the shape parameter does not necessarily lead to improved scale estimation if a natural moment condition (which is also the maximum likelihood restriction) is satisfied. This work therefore treats gamma scale parameter estimation as a separate problem, not as a by-product of shape parameter estimation, and studies several estimators in terms of second-order risk.
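The "natural moment condition" mentioned above is the usual maximum-likelihood restriction that the fitted mean equals the sample mean, i.e. shape × scale = x̄. A minimal sketch of gamma maximum-likelihood fitting (with assumed example data) that makes this restriction explicit:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=3.0, size=100)   # assumed example data

# Profile ML: for a fixed shape a, the ML scale is xbar / a, so the score
# for the shape reduces to log(a) - digamma(a) - log(xbar) + mean(log x) = 0.
xbar, mlogx = x.mean(), np.log(x).mean()
shape_hat = brentq(lambda a: np.log(a) - digamma(a) - np.log(xbar) + mlogx,
                   1e-3, 1e3)
scale_hat = xbar / shape_hat                    # the moment / ML restriction

print(shape_hat, scale_hat, shape_hat * scale_hat, xbar)   # last two agree
```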
65.
Josef Kozák, Statistics, 2013, 47(3): 363–371
Working with the linear regression model (1.1) and extraneous information (1.2) about the regression coefficients, the problem is how to construct estimators (1.3) with risk (1.4) that exploit the known information so as to reduce their risk relative to the risk (1.6) of the LSE (1.5). The solution of this problem is known for a positive definite matrix T, namely in the form of the estimators (1.8) and (1.10). First, it is shown that the proposed estimators (2.6), (2.9) and (2.16), based on pseudoinverses of the matrix L, solve the problem for a positive semidefinite matrix T = L'L. Further, there is the problem of interpretability of estimators in the sense of inequality (3.1); it is shown that all the estimators mentioned are at least partially interpretable in the sense of requirements (3.2) or (3.10).
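The specific estimators (2.6), (2.9) and (2.16) are not reproduced here. As a hedged illustration of the general idea, the sketch below computes the familiar restricted least-squares estimator under a linear restriction L @ beta = r, using Moore-Penrose pseudoinverses so that T = L'L need only be positive semidefinite; the data and the restriction are hypothetical.

```python
import numpy as np

def restricted_ls(X, y, L, r):
    """Least squares under the linear restriction L @ beta = r.

    Uses Moore-Penrose pseudoinverses so that L (and hence T = L'L)
    may be rank-deficient, i.e. only positive semidefinite.
    """
    XtX_inv = np.linalg.pinv(X.T @ X)
    beta_ols = XtX_inv @ X.T @ y
    M = L @ XtX_inv @ L.T
    correction = XtX_inv @ L.T @ np.linalg.pinv(M) @ (L @ beta_ols - r)
    return beta_ols - correction

# Hypothetical example: restrict beta[0] + beta[1] = 1.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
y = X @ np.array([0.7, 0.3, -1.0]) + rng.normal(scale=0.1, size=50)
L = np.array([[1.0, 1.0, 0.0]])
r = np.array([1.0])
print(restricted_ls(X, y, L, r))
```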
66.
Multivariate elliptically contoured distributions provide a viable framework for modeling time-series data; the class includes the multivariate normal, power exponential, t, and Cauchy distributions as special cases. For multivariate elliptically contoured autoregressive models, we derive the exact likelihood equations for the model parameters. They are closely related to the Yule-Walker equations and involve simple functions of the data. The maximum likelihood estimators are obtained by alternately solving two linear systems and are illustrated using simulated data.
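The exact likelihood equations of the paper are not given in the abstract; as a point of reference, the sketch below implements the classical multivariate Yule-Walker estimate for a VAR(1), the system the abstract says those equations are closely related to. The coefficient matrix and Gaussian errors are assumptions of the example.

```python
import numpy as np

def yule_walker_var1(X):
    """Classical Yule-Walker estimate of the coefficient matrix of a VAR(1).

    X has shape (T, k).  Gamma(0) and Gamma(1) are the lag-0 and lag-1
    sample autocovariance matrices; A_hat = Gamma(1) @ Gamma(0)^{-1}.
    """
    Xc = X - X.mean(axis=0)
    T = Xc.shape[0]
    gamma0 = Xc.T @ Xc / T
    gamma1 = Xc[1:].T @ Xc[:-1] / T
    return gamma1 @ np.linalg.inv(gamma0)

# Simulated check with a known coefficient matrix (Gaussian errors assumed).
rng = np.random.default_rng(3)
A = np.array([[0.5, 0.1], [0.0, 0.3]])
X = np.zeros((2000, 2))
for t in range(1, 2000):
    X[t] = A @ X[t - 1] + rng.normal(size=2)
print(yule_walker_var1(X))   # should be close to A
```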
67.
For Canada's boreal forest region, accurate modelling of the timing of the appearance of aspen leaves is important for forest fire management, as it signifies the end of the spring fire season that follows snowmelt. This article compares two methods, a midpoint rule and a conditional expectation method, for estimating the true flush date from interval-censored data collected at a large set of fire-weather stations in Alberta, Canada. The conditional expectation method uses the interval-censored kernel density estimator of Braun et al. (2005). The methods are compared via simulation, where true flush dates were generated from a normal distribution and then converted into intervals by adding and subtracting exponential random variables. The simulation parameters were estimated from the data set and several scenarios were considered. The study reveals that the conditional expectation method is never worse than the midpoint method, and that it has a significant advantage when the intervals are large. An illustration of the methodology applied to the Alberta data set is also provided.
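The simulation design described above is concrete enough to sketch. The code below generates normal flush dates, builds intervals with exponential perturbations, and compares the midpoint rule with a simplified conditional-expectation estimate based on a fitted normal density; the latter is only a stand-in for the interval-censored kernel density estimator of Braun et al. (2005), and all parameter values are assumed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Simulation design described in the abstract: true flush dates are normal,
# interval endpoints are obtained by subtracting / adding exponential noise.
mu, sigma, mean_gap, n = 130.0, 7.0, 5.0, 500   # assumed parameter values
true = rng.normal(mu, sigma, n)
left = true - rng.exponential(mean_gap, n)
right = true + rng.exponential(mean_gap, n)

# Midpoint rule.
midpoint = (left + right) / 2.0

# Simplified conditional-expectation estimate: E[T | left <= T <= right]
# under a normal density fitted to the midpoints (a stand-in for the
# interval-censored kernel density estimator of Braun et al., 2005).
m, s = midpoint.mean(), midpoint.std(ddof=1)
a, b = (left - m) / s, (right - m) / s
cond_exp = m + s * (norm.pdf(a) - norm.pdf(b)) / (norm.cdf(b) - norm.cdf(a))

for name, est in [("midpoint", midpoint), ("cond. expectation", cond_exp)]:
    print(name, "RMSE:", np.sqrt(np.mean((est - true) ** 2)))
```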
68.
Alireza Ghodsi, Communications in Statistics - Simulation and Computation, 2013, 42(6): 1256–1268
In this article, we implement the Regression Method for estimating (d1, d2) in the FISSAR(1, 1) model. It is also possible to estimate d1 and d2 by Whittle's method. We compute the estimated bias, standard error, and root mean square error in a simulation study, and compare the Regression Method estimates of d1 and d2 with Whittle's estimates. The simulation study finds that the Regression Method performs better than Whittle's estimator, in the sense that it has smaller root mean square error (RMSE) values.
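The Regression Method for the FISSAR(1, 1) model itself is not described in the abstract and is not reproduced here. As a heavily hedged analogue, the sketch below shows the classical GPH log-periodogram regression estimate of a single memory parameter d for a one-dimensional ARFIMA(0, d, 0) series, which conveys the general flavour of regression-based estimation of memory parameters; the bandwidth m = sqrt(n) and the simulated series are assumptions of the example.

```python
import numpy as np

def gph_estimate(x, m=None):
    """GPH log-periodogram regression estimate of the memory parameter d for a
    one-dimensional long-memory series (an analogue only, not the FISSAR(1, 1)
    Regression Method of the article)."""
    n = len(x)
    m = int(np.sqrt(n)) if m is None else m
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]
    log_periodogram = np.log(np.abs(dft) ** 2 / (2.0 * np.pi * n))
    regressor = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope = np.polyfit(regressor, log_periodogram, 1)[0]
    return -slope

# Simulate ARFIMA(0, d, 0) noise with d = 0.3 via its MA(infinity) weights
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k (truncated at n terms).
d_true, n = 0.3, 4096
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
eps = np.random.default_rng(5).normal(size=2 * n)
series = np.convolve(eps, psi)[n:2 * n]   # drop the burn-in portion
print(gph_estimate(series))               # should be roughly 0.3
```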
69.
Mauricio Sadinle, Communications in Statistics - Simulation and Computation, 2013, 42(9): 1909–1924
The good performance of logit confidence intervals for the odds ratio with small samples is well known, provided the actual odds ratio is not very large. In single capture–recapture estimation the odds ratio equals 1 because of the assumed independence of the samples. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed for estimating the size of a closed population under single capture–recapture estimation. The transformed logit interval, after adding 0.5 to each observed count before computation, is found to have actual coverage probabilities close to the nominal level even for small populations and even for capture probabilities near 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Given that the 0.5-transformed logit interval is very simple to compute and performs well, it is suitable for implementation by most users of the single capture–recapture method.
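As a rough sketch of the construction described above (and only a sketch: the exact transformed logit interval studied in the article may differ), the code below adds 0.5 to each observed count, estimates the unobserved cell n00 = n10*n01/n11 implied by an odds ratio of 1, and builds a delta-method interval for the population size on the log scale. The counts in the example are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def capture_recapture_logscale_ci(n11, n10, n01, level=0.95):
    """Rough sketch of a log-scale interval for the population size N in
    single capture-recapture (two samples assumed independent).

    n11: caught in both samples; n10 / n01: caught only in sample 1 / 2.
    Adds 0.5 to each observed count (as in the abstract) and builds a
    delta-method interval for the unobserved cell n00 = n10*n01/n11 on the
    log scale.  This is a simplified stand-in, not necessarily the exact
    transformed logit interval proposed in the article.
    """
    a, b, c = n11 + 0.5, n10 + 0.5, n01 + 0.5
    n00_hat = b * c / a
    se_log = np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / n00_hat)
    z = norm.ppf(0.5 + level / 2.0)
    lo, hi = n00_hat * np.exp(-z * se_log), n00_hat * np.exp(z * se_log)
    n_obs = n11 + n10 + n01
    return n_obs + lo, n_obs + hi

print(capture_recapture_logscale_ci(20, 30, 25))
```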
70.
The empirical likelihood (EL) technique is well established in both the theoretical and applied literature as a powerful nonparametric method for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) usually provides an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples drawn from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have a similar asymptotic structure, and we conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the Chen (1995) t-test modification to the EL ratio test. We show that the Chen approach amounts to a location shift of the observed data, whereas the classical Bartlett method is a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study, and a real data example illustrates the proposed approach in practice.
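To make the object of study concrete, here is a minimal sketch of Owen's empirical likelihood ratio statistic for a univariate mean, compared against the chi-square(1) calibration from the nonparametric Wilks theorem. It deliberately omits the Chen and Bartlett adjustments discussed in the article; the skewed example data are assumed.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_stat(x, mu0):
    """Empirical likelihood ratio statistic -2 log R(mu0) for a univariate
    mean, following Owen's construction (no Bartlett / Chen adjustment)."""
    z = x - mu0
    if z.max() <= 0 or z.min() >= 0:
        return np.inf          # mu0 lies outside the convex hull of the data
    g = lambda lam: np.sum(z / (1.0 + lam * z))   # derivative of the dual problem
    eps = 1e-8
    lam = brentq(g, -1.0 / z.max() + eps, -1.0 / z.min() - eps)
    return 2.0 * np.sum(np.log1p(lam * z))

# Hypothetical example: test H0: mean = 0 on skewed data and compare the
# statistic with the chi-square(1) calibration.
rng = np.random.default_rng(6)
x = rng.exponential(scale=1.0, size=30) - 1.0    # mean 0 under H0
stat = el_ratio_stat(x, mu0=0.0)
print("statistic:", stat, "p-value:", chi2.sf(stat, df=1))
```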