Full-text access type
Subscription full text | 4,620 articles |
Free | 137 articles |
Free within China | 17 articles |
Subject category
Management | 259 articles |
Ethnology | 2 articles |
Demography | 59 articles |
Collected works and series | 51 articles |
Theory and methodology | 82 articles |
General | 413 articles |
Sociology | 153 articles |
Statistics | 3,755 articles |
Publication year
2024 | 2 articles |
2023 | 35 articles |
2022 | 35 articles |
2021 | 35 articles |
2020 | 104 articles |
2019 | 184 articles |
2018 | 204 articles |
2017 | 311 articles |
2016 | 157 articles |
2015 | 96 articles |
2014 | 132 articles |
2013 | 1,329 articles |
2012 | 412 articles |
2011 | 129 articles |
2010 | 143 articles |
2009 | 154 articles |
2008 | 146 articles |
2007 | 110 articles |
2006 | 112 articles |
2005 | 113 articles |
2004 | 96 articles |
2003 | 76 articles |
2002 | 79 articles |
2001 | 79 articles |
2000 | 66 articles |
1999 | 68 articles |
1998 | 63 articles |
1997 | 46 articles |
1996 | 26 articles |
1995 | 22 articles |
1994 | 28 articles |
1993 | 19 articles |
1992 | 23 articles |
1991 | 9 articles |
1990 | 18 articles |
1989 | 10 articles |
1988 | 20 articles |
1987 | 10 articles |
1986 | 6 articles |
1985 | 5 articles |
1984 | 12 articles |
1983 | 15 articles |
1982 | 7 articles |
1981 | 7 articles |
1980 | 3 articles |
1979 | 8 articles |
1978 | 5 articles |
1977 | 2 articles |
1975 | 2 articles |
1973 | 1 article |
A total of 4,774 results found (search time: 15 ms)
641.
Hidden semi-Markov models (HSMMs) were introduced to overcome the constraint of a geometric sojourn-time distribution for the hidden states in classical hidden Markov models. Several variants of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we focus on the nonparametric case, in which the duration distributions are attached to transitions rather than to states (the convention in most published work on HSMMs); we therefore treat the underlying hidden semi-Markov chain in its general probabilistic structure. For that case, Barbu and Limnios (2008) proposed an Expectation–Maximization (EM) algorithm to estimate the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. We consider an improved version of Barbu and Limnios' EM algorithm that is faster than the original, and we also propose a stochastic version of the EM algorithm that achieves comparable estimates in less execution time. Numerical examples illustrate the efficient performance of the proposed algorithms.
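The structural point of the abstract above, durations attached to transitions (i, j) rather than to the visited state alone, can be illustrated with a toy generator. This is only a sketch of the model structure under invented states and distributions; it is not the EM estimator discussed in the article.

```python
import random

# Toy hidden semi-Markov chain in which the sojourn-time distribution is
# attached to each *transition* (i, j), not to the state j alone.
# All states, kernels, and duration pmfs below are invented for illustration.

random.seed(0)

# Transition probabilities between distinct states (part of the semi-Markov kernel).
P = {0: {1: 1.0}, 1: {0: 1.0}}          # two states: always switch
# Nonparametric sojourn pmfs indexed by the transition (from, to).
sojourn = {
    (0, 1): {1: 0.2, 2: 0.5, 3: 0.3},   # time spent in state 1 after a 0 -> 1 jump
    (1, 0): {2: 0.6, 4: 0.4},           # time spent in state 0 after a 1 -> 0 jump
}

def draw(pmf):
    """Sample one value from a dict {value: probability}."""
    u, acc = random.random(), 0.0
    for v, p in pmf.items():
        acc += p
        if u <= acc:
            return v
    return v                            # guard against float rounding

def simulate(n_jumps, start=0):
    """Return the state sequence observed over n_jumps transitions."""
    seq, cur = [], start
    for _ in range(n_jumps):
        nxt = draw(P[cur])
        d = draw(sojourn[(cur, nxt)])   # duration depends on the transition taken
        seq.extend([nxt] * d)
        cur = nxt
    return seq

path = simulate(6)
```

Because the duration pmf is looked up by the pair `(cur, nxt)`, two transitions entering the same state can induce different sojourn laws, which is exactly the generality the article works with.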
642.
The empirical likelihood (EL) technique is well established in both the theoretical and applied literature as a powerful nonparametric method for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) usually provides an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have asymptotically similar structures, and we conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the t-test modification of Chen (1995) to the EL ratio test, and we show that the Chen approach amounts to a location shift of the observed data, whereas the classical Bartlett method is a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study, and a real-data example illustrates the proposed approach in practice.
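For readers unfamiliar with the statistic being corrected, here is a minimal stdlib sketch of the uncorrected EL ratio statistic for a mean, computed by solving the usual Lagrange-multiplier equation by bisection. The Chen and Bartlett corrections discussed in the abstract are not implemented here.

```python
import math

def el_stat(x, mu):
    """Return -2 log R(mu) for the empirical likelihood ratio test of E[X] = mu."""
    z = [xi - mu for xi in x]
    if min(z) >= 0 or max(z) <= 0:
        return float("inf")              # mu outside the convex hull of the data
    # Solve sum z_i / (1 + lam * z_i) = 0 by bisection.  The implied weights
    # must stay positive, so lam is confined to (-1/max(z), -1/min(z)),
    # on which the left-hand side is strictly decreasing.
    lo = -1.0 / max(z) + 1e-10
    hi = -1.0 / min(z) - 1e-10
    def g(lam):
        return sum(zi / (1.0 + lam * zi) for zi in z)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * zi) for zi in z)

x = [1.1, 0.4, 2.3, 1.7, 0.9, 1.5]
stat_at_mean = el_stat(x, sum(x) / len(x))   # zero at the sample mean
```

By Wilks' theorem the statistic is compared to a chi-square(1) quantile; the corrections studied in the article adjust this comparison in finite samples.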
643.
Mauricio Sadinle, Communications in Statistics - Simulation and Computation, 2013, 42(9): 1909-1924
The good small-sample performance of logit confidence intervals for the odds ratio is well known, unless the actual odds ratio is very large. In single capture–recapture estimation the odds ratio equals 1 because the samples are assumed independent. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed in order to estimate the size of a closed population under single capture–recapture estimation. The transformed logit interval, after adding .5 to each observed count before computation, has actual coverage probabilities close to the nominal level even for small populations and even for capture probabilities close to 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Given that the .5-transformed logit interval is very simple to compute and performs well, it is well suited for most users of the single capture–recapture method.
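One plausible reading of the construction above can be sketched as follows: view the two samples as a 2x2 table with one unobserved cell, add .5 to each observed count, estimate the missing cell under odds ratio 1, and back-transform a Wald interval on the log scale. The Woolf-type standard error used below is an assumption for illustration; the exact interval in the article may differ.

```python
import math

def single_cr_interval(n1, n2, m, z=1.96):
    """Approximate 95% interval for the size N of a closed population,
    given n1 and n2 captures in the two samples and m recaptures."""
    a = m + 0.5            # caught in both samples
    b = n1 - m + 0.5       # caught only in sample 1
    c = n2 - m + 0.5       # caught only in sample 2
    d_hat = b * c / a      # unobserved cell implied by odds ratio = 1
    # Woolf-type SE on the log scale (an assumed form, not the article's formula).
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d_hat)
    lo = d_hat * math.exp(-z * se)
    hi = d_hat * math.exp(z * se)
    seen = n1 + n2 - m     # distinct individuals actually observed
    return seen + lo, seen + hi

lo, hi = single_cr_interval(n1=100, n2=120, m=30)
```

For these counts the classical Petersen point estimate is 100 * 120 / 30 = 400, and the sketched interval brackets it.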
644.
Alireza Ghodsi, Communications in Statistics - Simulation and Computation, 2013, 42(6): 1256-1268
In this article, we implement the Regression Method for estimating (d1, d2) of the FISSAR(1, 1) model; the two parameters can also be estimated by Whittle's method. We compute the estimated bias, standard error, and root mean square error (RMSE) in a simulation study, and we compare the Regression Method with Whittle's method. In this simulation study the Regression Method proved better than Whittle's estimator, in the sense that it had smaller RMSE values.
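The Regression Method referred to above generalizes the familiar log-periodogram regression idea to two frequency axes. The one-dimensional mechanics can be sketched with stdlib tools: regress the log periodogram on -log(4 sin^2(w/2)) at low frequencies. Here the sketch is checked on a noiseless synthetic spectrum, where the slope recovers d exactly; the bivariate FISSAR version is not implemented.

```python
import math

def regression_d(periodogram, freqs):
    """OLS slope of log I(w) on -log(4 sin^2(w/2)): an estimate of d."""
    x = [-math.log(4.0 * math.sin(w / 2.0) ** 2) for w in freqs]
    y = [math.log(i) for i in periodogram]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    sxx = sum((xi - xb) ** 2 for xi in x)
    return sxy / sxx

# Noiseless check: a spectrum proportional to (4 sin^2(w/2))^(-d)
# should give back d exactly.
d_true = 0.3
freqs = [2.0 * math.pi * j / 256 for j in range(1, 20)]
spec = [(4.0 * math.sin(w / 2.0) ** 2) ** (-d_true) for w in freqs]
d_hat = regression_d(spec, freqs)
```

With a real periodogram the regression is run only over the lowest frequencies and the estimate is noisy, which is why the article reports bias, standard error, and RMSE from simulation.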
645.
Qin Wang, Communications in Statistics - Simulation and Computation, 2013, 42(10): 1868-1876
Sliced regression is an effective dimension-reduction method that replaces the original high-dimensional predictors with an appropriate low-dimensional projection. It is free of probabilistic assumptions and can exhaustively estimate the central subspace. In this article, we propose incorporating shrinkage estimation into sliced regression so that variable selection is achieved simultaneously with dimension reduction. The new method improves estimation accuracy and yields better interpretability of the reduced variables. The efficacy of the proposed method is shown through both simulation and real data analysis.
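The slicing idea underlying this family of methods can be sketched in its simplest sliced-inverse-regression form: sort observations by the response, average the predictors within each slice, and extract the leading direction of those slice means. The shrinkage and variable-selection step proposed in the article is not shown, and the predictors are assumed already standardized.

```python
import math
import random

random.seed(1)
n, p, n_slices = 600, 3, 10

# Synthetic data whose true dimension-reduction direction is e1 = (1, 0, 0).
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [row[0] + 0.2 * random.gauss(0, 1) for row in X]

order = sorted(range(n), key=lambda i: y[i])
size = n // n_slices

# Candidate matrix M = sum_h p_h m_h m_h^T built from the slice means m_h.
M = [[0.0] * p for _ in range(p)]
for h in range(n_slices):
    idx = order[h * size:(h + 1) * size]
    m = [sum(X[i][k] for i in idx) / len(idx) for k in range(p)]
    for a in range(p):
        for b in range(p):
            M[a][b] += (len(idx) / n) * m[a] * m[b]

# Power iteration for the leading eigenvector of M.
v = [1.0] * p
for _ in range(100):
    w = [sum(M[a][b] * v[b] for b in range(p)) for a in range(p)]
    norm = math.sqrt(sum(wi * wi for wi in w))
    v = [wi / norm for wi in w]
```

The leading eigenvector aligns with the true direction e1; shrinkage, as in the article, would additionally push the near-zero loadings exactly to zero.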
646.
R. Hasan Abadi, Communications in Statistics - Simulation and Computation, 2013, 42(8): 1430-1443
Censored data arise naturally in a number of fields, particularly in reliability and survival analysis. There are several types of censoring; in this article we confine ourselves to random right censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on randomly right-censored data, assuming that the survival function of the censoring time is free of the unknown parameter. This assumption is sometimes inappropriate, and in such cases a proportional odds (PO) model may be more suitable (Lam and Leung, 2001). Under this model, we obtain point and interval estimates for the unknown parameters. Since it is important to check the adequacy of models upon which inferences are based (Lawless, 2003, p. 465), we also propose two new goodness-of-fit tests for the PO model based on randomly right-censored data. The proposed procedures are applied to two real data sets due to Smith (2002), and a Monte Carlo simulation study examines the behavior of the obtained estimators.
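As background for the kind of data the abstract above addresses, here is a stdlib sketch of random right censoring and the classical censored-data likelihood in the simplest exponential case, where the MLE is events divided by total time at risk. The PO censoring model and the goodness-of-fit tests proposed in the article are not implemented; the rates below are arbitrary.

```python
import random

random.seed(4)
rate, cens_rate, n = 1.0, 0.5, 4000

obs = []
for _ in range(n):
    t = random.expovariate(rate)        # true survival time
    c = random.expovariate(cens_rate)   # random right-censoring time
    obs.append((min(t, c), t <= c))     # (observed time, event indicator)

# Exponential MLE under right censoring: observed events / total time at risk.
events = sum(d for _, d in obs)
total_time = sum(t for t, _ in obs)
rate_hat = events / total_time
```

The point the article builds on is that when the censoring distribution itself depends on the unknown parameter, as under a PO model, this simple separation of the likelihood no longer holds.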
647.
A novel approach based on the concept of a generalized pivotal quantity (GPQ) is developed to construct confidence intervals for the mediated effect. Its performance is then compared with six interval estimation approaches in terms of empirical coverage probability and expected length, via simulation and two real examples. The results show that the GPQ-based and bootstrap percentile methods outperform the other methods when mediated effects exist in small and medium samples; moreover, the GPQ-based method exhibits a more stable performance in small and non-normal samples. A discussion of how to choose the best interval estimation method for mediated effects is presented.
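The mediated effect in question is typically the product a*b of the X-to-M and M-to-Y (given X) path coefficients. A simple Monte Carlo interval for that product, in the same pivotal-quantity spirit as the abstract but not the article's exact GPQ, can be sketched as follows; the point estimates and standard errors are made-up inputs that would in practice come from the two regression fits.

```python
import random

random.seed(2)

a_hat, se_a = 0.40, 0.10     # hypothetical X -> M coefficient and its SE
b_hat, se_b = 0.50, 0.12     # hypothetical M -> Y (given X) coefficient and its SE

# Draw pivotal replicates of the product a*b and take empirical quantiles.
draws = sorted(random.gauss(a_hat, se_a) * random.gauss(b_hat, se_b)
               for _ in range(20000))
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]   # approximate 95% interval for a*b
```

Because the product of two normals is skewed, this interval is asymmetric around a_hat * b_hat, which is one reason product-based intervals outperform naive Wald intervals for mediation.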
648.
The efficiency of penalized methods (Fan and Li, 2001) depends strongly on a tuning parameter, because it controls the extent of penalization; it is therefore important to select it appropriately. In general, tuning parameters are chosen by data-driven approaches, such as the commonly used generalized cross-validation. In this article, we propose an alternative method for deriving the tuning parameter selector in the penalized least squares framework, which can lead to an improved estimate. Simulation studies support the theoretical findings, and the Type I and Type II error rates are compared for the L1, hard thresholding, and smoothly clipped absolute deviation (SCAD) penalty functions. The results are tabulated and discussed.
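The three penalties compared in the abstract can be written out explicitly as functions of |theta|: the L1 (lasso) penalty, the hard-thresholding penalty, and the SCAD penalty of Fan and Li (2001) with its customary a = 3.7. This is just the penalty algebra; the tuning-parameter selector proposed in the article is not implemented here.

```python
def l1(theta, lam):
    """Lasso penalty: lam * |theta|."""
    return lam * abs(theta)

def hard(theta, lam):
    """Hard-thresholding penalty: lam^2 - (|theta| - lam)^2 for |theta| < lam."""
    t = abs(theta)
    return lam ** 2 - (t - lam) ** 2 * (t < lam)

def scad(theta, lam, a=3.7):
    """SCAD penalty: L1 near zero, quadratic blend, then constant."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t ** 2 - 2 * a * lam * t + lam ** 2) / (2 * (a - 1))
    return (a + 1) * lam ** 2 / 2
```

The point of SCAD is visible from the branches: it matches L1 for small coefficients but levels off at (a + 1) * lam^2 / 2, so large coefficients are not over-shrunk, which is why the tuning parameter lam interacts differently with each penalty.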
649.
Daniele De Martini, Communications in Statistics - Simulation and Computation, 2013, 42(9): 1263-1277
We study the robustness of adapting the sample size of a phase III trial on the basis of existing phase II data when the phase III effect size is lower than that of phase II. A criterion of clinical relevance for phase II results is applied in deciding whether to launch phase III, whose statistical analysis cannot include the phase II data. The adaptation consists in adopting the conservative approach to sample size estimation, which accounts for the variability of the phase II data. Several conservative sample size estimation strategies, Bayesian and frequentist, are compared with the calibrated optimal γ conservative strategy (COS), which is the best performer when the phase II and phase III effect sizes are equal. The overall power (OP) of these strategies and the mean square error (MSE) of their sample size estimators are computed under different scenarios, in the presence of the structural bias caused by the lower phase III effect size, to evaluate the robustness of the strategies. When the structural bias is quite small (i.e., the ratio of the phase III to the phase II effect size exceeds 0.8) and certain operating conditions for sample size estimation hold, COS can still provide acceptable results for planning phase III trials, even though the OP is higher in the absence of bias. Our main results concern the introduction of a correction that balances the structural bias and affects only the sample size estimates, not the launch probabilities. The correction is based on a postulated value of the structural bias; it is therefore more intuitive and easier to use than corrections based on modifying the Type I and/or Type II errors. We compare the corrected conservative sample size estimation strategies in the presence of a quite small bias. When the postulated correction is right, COS provides good OP and the lowest MSE. Moreover, the OPs of COS are then even higher than those observed without bias, thanks to a higher launch probability and similar estimation performance; the structural bias can therefore be exploited to improve sample size estimation. When the postulated correction is smaller than necessary, COS remains the best performer and still works well. A larger-than-necessary correction should be avoided.
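The general mechanics of conservative sample size estimation can be sketched with the standard two-arm formula: instead of plugging the phase II point estimate of the standardized effect size into n = 2 (z_{1-alpha/2} + z_{power})^2 / delta^2, plug in a lower gamma-confidence bound, so phase III is sized against a pessimistic effect. This shows the general idea only; the calibrated optimal-gamma (COS) strategy of the article chooses gamma in a specific data-driven way not reproduced here, and the fixed gamma and standard-error formula below are illustrative assumptions.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # standard normal quantile function

def n_per_arm(delta, alpha=0.05, power=0.90):
    """Classical two-arm normal-approximation sample size per arm."""
    return 2.0 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2

def conservative_n(delta_hat, n2_per_arm, gamma=0.8, **kw):
    """Size phase III using a lower gamma-bound on the phase II effect size."""
    se = (2.0 / n2_per_arm) ** 0.5          # approximate SE of a standardized effect
    delta_low = delta_hat - z(gamma) * se   # pessimistic effect size
    return n_per_arm(delta_low, **kw)

naive = n_per_arm(0.5)                      # plug-in estimate
consv = conservative_n(0.5, n2_per_arm=50)  # conservative estimate
```

The conservative estimate is necessarily larger than the plug-in one; the trade-off studied in the article is how much extra sample size buys robustness against a phase III effect that is genuinely smaller than the phase II estimate.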
650.
Stationary long-memory processes have been studied extensively over the past decades. When we deal with financial, economic, or environmental data, seasonality and time-varying long-range dependence can often be observed, so some kind of non-stationarity exists. To account for this phenomenon, we propose a new class of stochastic processes: locally stationary k-factor Gegenbauer processes. We present a procedure that consistently estimates the time-varying parameters by applying the discrete wavelet packet transform, and we investigate the robustness of the algorithm in a simulation study. Finally, we apply our method to the Nikkei Stock Average 225 (NSA 225) index series.
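The stationary building block behind the locally stationary k-factor model above is the single-factor Gegenbauer process x_t = (1 - 2uB + B^2)^(-d) eps_t, whose MA weights are the Gegenbauer polynomials C_j^(d)(u). A minimal stdlib sketch via the standard three-term recursion follows; the truncation length and the fixed (u, d) are illustrative choices, and the wavelet-packet estimation procedure of the article is not implemented.

```python
import math
import random

def gegenbauer_weights(d, u, n):
    """First n coefficients of (1 - 2uB + B^2)^(-d), via the
    three-term recursion for Gegenbauer polynomials C_j^(d)(u)."""
    c = [1.0, 2.0 * d * u]
    for j in range(2, n):
        c.append((2.0 * u * (j + d - 1.0) * c[j - 1]
                  - (j + 2.0 * d - 2.0) * c[j - 2]) / j)
    return c[:n]

def simulate(d, u, t, trunc=200, seed=3):
    """Truncated MA simulation of a single-factor Gegenbauer process."""
    rng = random.Random(seed)
    w = gegenbauer_weights(d, u, trunc)
    eps = [rng.gauss(0, 1) for _ in range(t + trunc)]
    return [sum(w[j] * eps[trunc + i - j] for j in range(trunc))
            for i in range(t)]

u = math.cos(2 * math.pi / 12)   # spectral pole at a 12-period seasonality
x = simulate(d=0.2, u=u, t=100)
```

The spectral pole sits at frequency arccos(u) rather than at zero, which is how Gegenbauer processes capture seasonal long memory; the locally stationary model of the article lets (u, d) vary slowly with time.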