91.
Muhammad Nouman Qureshi, Cem Kadilar, Muhammad Noor Ul Amin, Muhammad Hanif. Journal of Statistical Computation and Simulation, 2018, 88(14): 2761-2774
The use of robust measures helps to increase the precision of estimators, especially when estimating extremely skewed distributions. In this article, a generalized ratio estimator is proposed that uses robust measures with a single auxiliary variable under the adaptive cluster sampling (ACS) design. We incorporate the tri-mean (TM), mid-range (MR) and Hodges-Lehmann (HL) estimator of the auxiliary variable as robust measures, together with some conventional measures. Expressions for the bias and mean square error (MSE) of the proposed generalized ratio estimator are derived. Two numerical studies, one on an artificial clustered population and one on real data, examine the performance of the proposed estimator against the usual mean-per-unit estimator under simple random sampling (SRS). The simulation results show that the proposed estimators outperform the competing estimators on both the real and the artificial populations.
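As a rough illustration of the robust measures named above, the sketch below computes the tri-mean, mid-range and Hodges-Lehmann estimator and plugs one of them into a shifted ratio estimator of the form ȳ(X̄ + λ)/(x̄ + λ). The shift form and the use of a simple random sample (rather than the paper's ACS design) are simplifying assumptions, not the authors' exact estimator.

```python
import itertools
import numpy as np

def tri_mean(x):
    # Tukey's tri-mean: weighted average of the quartiles and median
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * q2 + q3) / 4

def mid_range(x):
    # Midpoint of the sample extremes
    return (np.min(x) + np.max(x)) / 2

def hodges_lehmann(x):
    # Median of all pairwise (Walsh) averages, including self-pairs
    pairs = [(a + b) / 2
             for a, b in itertools.combinations_with_replacement(x, 2)]
    return np.median(pairs)

def ratio_estimate(y_s, x_s, x_pop, robust=tri_mean):
    # Shifted ratio estimator: both auxiliary means are shifted by a
    # robust measure lambda of the auxiliary variable before dividing.
    lam = robust(x_pop)
    return np.mean(y_s) * (np.mean(x_pop) + lam) / (np.mean(x_s) + lam)
```

Any of the three robust measures (or a conventional one such as the mean) can be passed as the `robust` argument.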
92.
In this paper, we present an algorithm for clustering based on univariate kernel density estimation, named ClusterKDE. It is an iterative procedure in which each step obtains a new cluster by minimizing a smooth kernel function. Although our applications use the univariate Gaussian kernel, any smooth kernel function can be used. The proposed algorithm has the advantage of not requiring the number of clusters a priori. Furthermore, the ClusterKDE algorithm is simple, easy to implement and well-defined, and it stops in a finite number of steps; that is, it always converges regardless of the initial point. We illustrate our findings with numerical experiments obtained by implementing the algorithm in Matlab and applying it to practical problems. The results indicate that ClusterKDE is competitive and fast compared with the well-known Clusterdata and K-means algorithms that Matlab provides for clustering data.
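The density-based idea can be sketched in one dimension: estimate a Gaussian KDE on a grid and cut the data at the density's local minima, so the number of clusters emerges from the data. This is a simplified stand-in, not the published ClusterKDE procedure (which obtains each cluster by iteratively minimizing a smooth kernel function); the bandwidth `h` and grid size are free choices.

```python
import numpy as np

def gaussian_kde(grid, data, h):
    # Univariate Gaussian kernel density estimate evaluated on a grid
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_clusters(data, h, n_grid=512):
    # Cut the data at the local minima (valleys) of the estimated
    # density; each interval between valleys becomes one cluster,
    # so the cluster count is not fixed in advance.
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, n_grid)
    dens = gaussian_kde(grid, data, h)
    is_valley = (dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])
    valleys = grid[1:-1][is_valley]
    return np.digitize(data, valleys)  # integer cluster label per point
```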
93.
Estimation in the multivariate context when the number of available observations is less than the number of variables is a classical theoretical problem. To ensure estimability, one has to impose certain constraints on the parameters. A method for maximum likelihood estimation under such constraints is proposed to solve this problem. Even in the extreme case where only a single multivariate observation is available, it may provide a feasible solution. At the same time, it offers a simple, straightforward way to allow for specific structures within and between the covariance matrices of several populations. The methodology yields exact maximum likelihood estimates.
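To make the constrained-likelihood idea concrete, here is a minimal sketch under one particular constraint, Σ = σ²I (spherical covariance), for which the MLE stays well-defined even from a single observation with known mean. The paper's method is more general; this specific structure is an assumption chosen for illustration.

```python
import numpy as np

def spherical_mle(X, mu):
    # MLE of sigma^2 under the constraint Sigma = sigma^2 * I.
    # Estimable even when n < p (including n = 1), unlike the
    # unrestricted sample covariance, which is then singular.
    X = np.atleast_2d(X)
    n, p = X.shape
    return np.sum((X - mu) ** 2) / (n * p)
```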
94.
95.
K. C. Siju. Journal of Statistical Computation and Simulation, 2018, 88(9): 1717-1748
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation, such as fatigue and cracks) are observed over a specified period of time. Depending on the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components have different lifetimes, the rate of degradation is treated as a random variable. The time-to-failure distribution is obtained at a critical level of degradation. Exponential and power degradation models are studied, and an exponential density is assumed for the random degradation rate. The maximum likelihood and Bayesian estimators of the parameter of this exponential density, the predictive distribution, a hierarchical Bayes approach and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates. Illustrations are provided for train wheel degradation data.
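A rough sketch of the exponential degradation model described above, assuming paths of the form D(t) = D₀·exp(rt): fit each unit's rate by log-linear least squares, convert a rate into a failure time at a critical degradation level, and take the MLE of the exponential parameter for the rate distribution. The function names and the least-squares fitting step are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def fit_rates(times, paths, d0):
    # Per-unit degradation rate from the exponential model
    # D(t) = d0 * exp(r * t), fitted by log-linear least squares.
    rates = []
    for d in paths:
        slope, _ = np.polyfit(times, np.log(np.asarray(d) / d0), 1)
        rates.append(slope)
    return np.array(rates)

def time_to_failure(rate, d0, d_crit):
    # Failure time: first crossing of the critical level d_crit
    return np.log(d_crit / d0) / rate

def mle_exponential(rates):
    # MLE of the parameter of the exponential density assumed for
    # the random degradation rate: theta_hat = 1 / mean(rates)
    return 1.0 / np.mean(rates)
```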
96.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive (MAP) approach. To incorporate the external information into the sample size re-estimation, we propose updating the MAP prior with the results of the internal pilot study and re-estimating the sample size using an estimator from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re-estimation approach based on the pooled variance estimator. The simulation study shows that, when no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics relative to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial differs from the prior location, the traditional procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should therefore be balanced against the risks.
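The re-estimation step can be sketched as follows, assuming a two-arm trial with a normal continuous endpoint. The `blended_variance` helper is a crude precision-weighted stand-in for updating a meta-analytic-predictive prior, not the actual MAP machinery; both function names are hypothetical.

```python
import math
from statistics import NormalDist

def per_group_n(sigma2, delta, alpha=0.05, power=0.8):
    # Two-arm trial, continuous endpoint: per-group sample size
    # n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return math.ceil(2 * sigma2 * (za + zb) ** 2 / delta ** 2)

def blended_variance(s2_pilot, n_pilot, s2_prior, n_prior_effective):
    # Precision-weighted blend of the internal pilot variance and the
    # historical (prior) variance, weighted by (effective) sample sizes.
    w = n_pilot / (n_pilot + n_prior_effective)
    return w * s2_pilot + (1 - w) * s2_prior
```

Re-estimation then amounts to `per_group_n(blended_variance(...), delta)` instead of using the pilot variance alone.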
97.
Tobias A. Möller. Stochastic Models, 2016, 32(1): 77-98
In this article, an integer-valued self-exciting threshold model with a finite range, based on the binomial INARCH(1) model, is proposed. Important stochastic properties are derived, and approaches to parameter estimation are discussed. A real-data example on the regional spread of public drunkenness in Pittsburgh demonstrates the applicability of the new model in comparison to existing models. Feasible modifications of the model, designed to handle special features such as zero-inflation, are also presented.
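A minimal simulation sketch of a self-exciting threshold binomial INARCH(1) process, assuming the conditional success probability is linear in X_{t-1}/n with regime-dependent coefficients switching at a threshold R. This parameterization is a plausible reading of the model class, not the article's exact specification.

```python
import random

def simulate_threshold_binarch(T, n, low, high, R, seed=1):
    # X_t | X_{t-1} ~ Bin(n, pi_t), pi_t = a + b * X_{t-1} / n,
    # where (a, b) = low if X_{t-1} <= R (lower regime) and
    # (a, b) = high otherwise -- the self-exciting threshold.
    rng = random.Random(seed)
    x = [n // 2]
    for _ in range(T - 1):
        a, b = low if x[-1] <= R else high
        pi = min(max(a + b * x[-1] / n, 0.0), 1.0)  # clip to [0, 1]
        x.append(sum(rng.random() < pi for _ in range(n)))  # Bin(n, pi)
    return x
```

The finite range [0, n] is automatic: each count is a sum of n Bernoulli draws.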
98.
Journal of Statistical Computation and Simulation, 2012, 82(12): 1145-1161
The Perron test, which is based on a Dickey–Fuller test regression, is a commonly employed approach to testing for a unit root in the presence of a structural break of unknown timing. In the case of an innovational outlier (IO), the Perron test tends to exhibit spurious rejections in finite samples when the break occurs under the null hypothesis. In the present paper, a new Perron-type IO unit root test is developed. Monte Carlo experiments show that the new test does not over-reject the null hypothesis; even for a simultaneous level and slope break in trending data, the empirical size is near its nominal level. The test distribution coincides with that for a known break date. Furthermore, the test identifies the true break date very accurately, even for small breaks. The Nelson–Plosser data set serves as an application.
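The underlying Dickey–Fuller-type regression with break dummies can be sketched as below, assuming a level-shift dummy and a one-period impulse dummy at a given break date. The critical values, the IO specification details, and the break date search of the actual test are omitted.

```python
import numpy as np

def perron_io_regression(y, t_break):
    # Dickey-Fuller-type test regression with intercept, linear trend,
    # a level-shift dummy DU (1 after the break) and a one-period
    # impulse dummy D(Tb); the unit root statistic is the t-ratio on
    # the coefficient of y_{t-1}.
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    T = len(dy)
    trend = np.arange(1, T + 1)
    du = (trend > t_break).astype(float)
    dtb = (trend == t_break + 1).astype(float)
    X = np.column_stack([np.ones(T), trend, du, dtb, ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    s2 = resid @ resid / (T - X.shape[1])
    se_alpha = np.sqrt(s2 * np.linalg.inv(X.T @ X)[-1, -1])
    return beta[-1], beta[-1] / se_alpha  # alpha-hat, t-statistic
```

A strongly negative t-ratio is evidence against the unit root null; the appropriate critical values come from the test's own distribution, not the standard normal.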
99.
Journal of Statistical Computation and Simulation, 2012, 82(3-4): 259-267
The robustness of an extended version of Colton's decision-theoretic model is considered. The extended version includes the losses due to patients who are not entered in the experiment but require treatment while it is in progress. The topics considered include the effect on risk of using a sample size considerably smaller than the optimum, the use of an incorrect patient horizon, the application of a modified loss function, and the use of a two-point prior distribution. It is shown that the investigated model is robust with respect to all these changes, with the exception of the use of the modified prior density.
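The sample-size trade-off in Colton-style models can be illustrated with a simple expected-loss function, assuming a known patient horizon N, true treatment difference δ, and normal outcomes: trial patients on the inferior arm always incur loss, and post-trial patients incur loss only when the trial selects the wrong arm. This is only a sketch of the trade-off, not the extended model's actual loss.

```python
import math
from statistics import NormalDist

def expected_loss(n, N, delta, sigma):
    # n of the 2n trial patients receive the inferior treatment; the
    # remaining N - 2n patients receive it whenever the trial picks
    # the wrong arm, which happens with probability
    # Phi(-delta * sqrt(n) / (sigma * sqrt(2))).
    p_wrong = NormalDist().cdf(-delta * math.sqrt(n) / (sigma * math.sqrt(2)))
    return delta * (n + (N - 2 * n) * p_wrong)

def optimal_n(N, delta, sigma):
    # Brute-force search for the per-arm size minimizing expected loss
    return min(range(1, N // 2), key=lambda n: expected_loss(n, N, delta, sigma))
```

Larger trials reduce the chance of a wrong selection but expose more trial patients to the inferior arm, so the optimum is interior.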
100.
Tatiana Komarova. Communications in Statistics - Theory and Methods, 2017, 46(10): 4915-4931
This article considers the nonparametric estimation of the absolutely continuous distribution functions of independent lifetimes of non-identical components in k-out-of-n systems, 2 ≤ k ≤ n, from observed “autopsy” data. In economics, ascending “button” or “clock” auctions with n heterogeneous bidders with independent private values are 2-out-of-n systems, and classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions, the estimation problem is shown to be well-posed, and the suggested extremum sieve estimator is proven to be consistent. The article uses sieve spaces of Bernstein polynomials, which make it straightforward to impose monotonicity constraints on the estimated distribution functions.
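The monotonicity device mentioned at the end is easy to illustrate: a Bernstein polynomial whose coefficients increase from 0 to 1 is automatically a valid, monotone distribution function on [0, 1]. A minimal sketch of the evaluation step only (the sieve estimation itself is not shown):

```python
import math

def bernstein_cdf(u, coefs):
    # Bernstein polynomial of degree m = len(coefs) - 1 on [0, 1]:
    # F(u) = sum_k coefs[k] * C(m, k) * u^k * (1 - u)^(m - k).
    # With 0 = c_0 <= c_1 <= ... <= c_m = 1 this is a valid CDF --
    # the constraint the sieve estimator imposes on its coefficients.
    m = len(coefs) - 1
    return sum(c * math.comb(m, k) * u ** k * (1 - u) ** (m - k)
               for k, c in enumerate(coefs))
```

In the sieve approach, the coefficients are the free parameters optimized subject to this ordering constraint.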