Full-text access type
Paid full text | 594 articles |
Free full text | 10 articles |
Subject classification
Management science | 199 articles |
Ethnology | 1 article |
Talent studies | 1 article |
Demography | 11 articles |
Collected works and anthologies | 6 articles |
Theory and methodology | 4 articles |
General | 38 articles |
Sociology | 20 articles |
Statistics | 324 articles |
Publication year
2024 | 1 article |
2023 | 2 articles |
2021 | 5 articles |
2020 | 2 articles |
2019 | 8 articles |
2018 | 14 articles |
2017 | 53 articles |
2016 | 17 articles |
2015 | 15 articles |
2014 | 11 articles |
2013 | 180 articles |
2012 | 44 articles |
2011 | 12 articles |
2010 | 15 articles |
2009 | 20 articles |
2008 | 14 articles |
2007 | 13 articles |
2006 | 8 articles |
2005 | 8 articles |
2004 | 8 articles |
2003 | 8 articles |
2002 | 12 articles |
2001 | 6 articles |
2000 | 12 articles |
1999 | 9 articles |
1998 | 10 articles |
1997 | 5 articles |
1996 | 8 articles |
1995 | 8 articles |
1994 | 3 articles |
1993 | 11 articles |
1992 | 16 articles |
1991 | 5 articles |
1990 | 4 articles |
1989 | 2 articles |
1988 | 8 articles |
1987 | 3 articles |
1986 | 4 articles |
1985 | 2 articles |
1984 | 4 articles |
1983 | 5 articles |
1982 | 4 articles |
1981 | 4 articles |
1980 | 1 article |
Sort order: 604 results in total, search time 203 ms
1.
This article proposes several estimators of the ridge parameter k for the Poisson ridge regression (RR) model. The estimators are evaluated by means of Monte Carlo simulations. As performance criteria, we calculate the mean squared error (MSE), the mean value, and the standard deviation of k. The first criterion is commonly used, while the other two have not previously been used in analyses of Poisson RR. These criteria are nevertheless very informative: if several estimators have an equal estimated MSE, those with a low mean value and standard deviation of k should be preferred. Based on the simulation results, we recommend some biasing parameters that may be useful to practitioners in the health, social, and physical sciences.
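The three performance criteria above can be sketched in a few lines. This is only an illustration of the criteria, not the authors' estimators: the ridge-parameter estimates below are hypothetical Gaussian draws standing in for Monte Carlo replications.

```python
import math
import random

random.seed(0)

def performance(k_hats, k_true):
    """Mean value, standard deviation, and MSE of simulated estimates of k."""
    n = len(k_hats)
    mean = sum(k_hats) / n
    sd = math.sqrt(sum((k - mean) ** 2 for k in k_hats) / (n - 1))
    mse = sum((k - k_true) ** 2 for k in k_hats) / n
    return mean, sd, mse

# Hypothetical ridge-parameter estimates from 1000 Monte Carlo replications
k_hats = [random.gauss(0.5, 0.1) for _ in range(1000)]
mean, sd, mse = performance(k_hats, k_true=0.5)
```

Two estimators with equal estimated MSE can then be ranked by their mean value and standard deviation, as the abstract suggests.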
2.
This paper is concerned with joint tests of non-nested models and of simultaneous departures from homoskedasticity, serial independence, and normality of the disturbance terms. Locally equivalent alternative models are used to construct the joint tests, since they provide a convenient way to incorporate more than one type of departure from the classical conditions. The joint tests represent a simple asymptotic solution to the “pre-testing” problem in the context of non-nested linear regression models. Our simulation results indicate that the proposed tests have good finite-sample properties.
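One of the departures mentioned above, non-normality of the disturbances, is classically measured by the Jarque-Bera statistic. The sketch below illustrates that single component only; it is not the authors' joint test.

```python
import math

def jarque_bera(residuals):
    """Jarque-Bera statistic: n/6 * (skew^2 + (kurtosis - 3)^2 / 4),
    large values indicating departure from normality."""
    n = len(residuals)
    mean = sum(residuals) / n
    m2 = sum((r - mean) ** 2 for r in residuals) / n
    m3 = sum((r - mean) ** 3 for r in residuals) / n
    m4 = sum((r - mean) ** 4 for r in residuals) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

# Hypothetical residuals with a clearly non-normal (flat) shape
jb = jarque_bera([(-1) ** i * (i % 5) for i in range(200)])
```

A joint test in the spirit of the paper would combine such a component with analogous statistics for heteroskedasticity and serial dependence.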
3.
This paper proposes an overlapping-based test statistic for testing the equality of two exponential distributions with different scale and location parameters. The test statistic is defined as the maximum likelihood estimate of Weitzman's overlapping coefficient, which measures the agreement of two densities. The proposed test statistic is derived in closed form. Simulated critical points are generated for various sample sizes and significance levels via Monte Carlo simulation. Statistical powers of the proposed test are computed via simulation studies and compared with those of the existing log-likelihood ratio test.
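Weitzman's overlapping coefficient is the integral of min(f1, f2) over the support. As a sketch, the version below evaluates it numerically for two exponential densities with location parameters fixed at zero (the paper's closed form, which also handles location shifts, is not reproduced here):

```python
import math

def ovl_exponential(lam1, lam2, upper=50.0, n=200000):
    """Weitzman overlapping coefficient, integral of min(f1, f2),
    for two exponential densities (rate parametrization, location 0),
    computed by the midpoint rule."""
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        f1 = lam1 * math.exp(-lam1 * x)
        f2 = lam2 * math.exp(-lam2 * x)
        total += min(f1, f2) * h
    return total

same = ovl_exponential(1.0, 1.0)   # identical densities: coefficient near 1
diff = ovl_exponential(1.0, 2.0)   # different scales: coefficient below 1
```

The coefficient equals 1 for identical densities and decreases toward 0 as the two distributions separate, which is what makes it usable as a test statistic for equality.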
4.
While neoclassical economic theory offers insight into the way that audit rates and penalty rates interact when individuals decide whether to declare income for taxation, it predicts far lower levels of compliance than are actually observed. This paper analyses experimental responses to explore a dynamic interaction between audit and penalty rates as individuals learn how to comply with taxation. It compares the responses of subjects in experiments with the responses predicted when individuals rely on an adaptive learning process that offers information feedback about decision payoffs. This comparison suggests that learning is an important consideration when explaining differences between predicted and observed levels of tax compliance.
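The adaptive-learning idea can be sketched with a toy payoff-feedback rule. Every number and the update rule below are hypothetical illustrations, not the paper's experimental design: an agent's propensity to declare income is reinforced in proportion to the payoff the chosen action actually delivered.

```python
import random

random.seed(7)

def simulate_compliance(audit_rate, penalty_rate, rounds=2000, lr=0.1):
    """Toy adaptive-learning agent (hypothetical parametrization): the
    propensity to declare income is nudged toward the chosen action,
    weighted by the realized payoff, mimicking payoff feedback."""
    income, tax_rate = 100.0, 0.3
    p_declare = 0.5
    for _ in range(rounds):
        declare = random.random() < p_declare
        if declare:
            payoff = income * (1.0 - tax_rate)
        elif random.random() < audit_rate:
            payoff = income - penalty_rate * tax_rate * income
        else:
            payoff = income  # evaded and not audited
        # reinforce the chosen action in proportion to its normalized payoff
        target = 1.0 if declare else 0.0
        p_declare += lr * (payoff / income) * (target - p_declare)
        p_declare = min(max(p_declare, 0.01), 0.99)
    return p_declare

low = simulate_compliance(audit_rate=0.05, penalty_rate=2.0)
high = simulate_compliance(audit_rate=0.6, penalty_rate=3.0)
```

Under this rule, learned compliance rises with the audit and penalty rates, the kind of dynamic interaction the paper examines experimentally.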
5.
We propose a procedure to identify the lowest dose having a greater effect than a threshold dose, under the assumption that the dose mean response is monotone in a dose-response test. The procedure uses statistics based on contrasts among sample means and applies a group sequential scheme to identify the dose efficiently. If the dose can be identified at an early stage of the sequential test, the procedure terminates with few observations, which makes it attractive from an economic point of view. In simulation studies, we compare the performance of procedures based on three contrasts.
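A contrast statistic among sample means, of the kind such procedures build on, can be sketched in a pooled-variance t-type form. The specific contrasts and sequential boundaries of the paper are not reproduced; the groups and coefficients below are hypothetical.

```python
import math

def contrast_statistic(groups, coeffs):
    """t-type statistic for a contrast among dose-group sample means,
    using a pooled variance estimate across all groups."""
    means = [sum(g) / len(g) for g in groups]
    # pooled variance from within-group sums of squares
    ss = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df = sum(len(g) for g in groups) - len(groups)
    s2 = ss / df
    num = sum(c * m for c, m in zip(coeffs, means))
    den = math.sqrt(s2 * sum(c * c / len(g) for c, g in zip(coeffs, groups)))
    return num / den

# Hypothetical data: threshold-dose group vs. one candidate dose group
t = contrast_statistic([[1.0, 1.2, 0.9], [2.1, 2.0, 2.3]], [-1.0, 1.0])
```

A large positive value of the statistic supports the candidate dose having a greater mean response than the threshold dose; the group sequential scheme would compare such statistics against stage-wise critical values.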
6.
Omega, 2014
Project control has been a research topic for decades and attracts both academics and practitioners. Project control systems indicate the direction of change in preliminary planning variables compared with actual performance; when current project performance deviates from planned performance, the system issues a warning so that corrective actions can be taken. Earned value management/earned schedule (EVM/ES) systems have played a central role in project control, and provide straightforward key performance metrics that measure the deviations between planned and actual performance in terms of time and cost. In this paper, a new statistical project control procedure sets tolerance limits to improve the discriminative power between progress situations that are either statistically likely or unlikely to occur under the project baseline schedule. The tolerance limits are derived from subjective estimates of the activity durations of the project. Using the existing and commonly known EVM/ES metrics, the resulting project control charts have an improved ability to trigger actions when variation in a project's progress exceeds predefined thresholds. A computational experiment tests the ability of these statistical project control charts to discriminate between acceptable and unacceptable variation in the durations of the individual activities. The experiments compare statistical tolerance limits with traditional earned value management thresholds and validate their power to report warning signals when projects deviate significantly from the baseline schedule.
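The idea of statistical tolerance limits can be sketched by simulating a schedule performance index (SPI) under the baseline schedule and flagging observed progress that falls outside empirical percentile limits. The SPI distribution below is hypothetical, not derived from real activity-duration estimates.

```python
import random

random.seed(42)

def tolerance_limits(simulated_spi, alpha=0.05):
    """Empirical two-sided tolerance limits for a performance metric,
    taken as the alpha/2 and 1 - alpha/2 sample quantiles of values
    simulated under the baseline schedule."""
    s = sorted(simulated_spi)
    lo = s[int(alpha / 2 * len(s))]
    hi = s[int((1 - alpha / 2) * len(s)) - 1]
    return lo, hi

# Hypothetical Monte Carlo SPI values under the baseline schedule
baseline = [random.gauss(1.0, 0.05) for _ in range(5000)]
lo, hi = tolerance_limits(baseline)

# An observed SPI of 0.80 falls below the lower limit: trigger a warning
warning = not (lo <= 0.80 <= hi)
```

A control chart built this way signals only when observed EVM/ES metrics are statistically unlikely under the baseline, rather than on any fixed, arbitrary threshold.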
7.
8.
Journal of Statistical Computation and Simulation, 2012, 82(3-4): 227-236
The widely used Tietjen–Moore multiple-outlier statistic has a defect as originally proposed, in that it may test the wrong observations as outliers. The defect is corrected by redefinition, and the statistic is extended to make use of possible additional information on the underlying variance. Results of a simulation of the revised statistic are presented.
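The (two-sided) Tietjen–Moore statistic E_k compares the sum of squares after removing the k points farthest from the mean with the total sum of squares; small values suggest the removed points are outliers. This sketch shows the original statistic only, not the corrected and extended version the paper proposes.

```python
def tietjen_moore(x, k):
    """Two-sided Tietjen-Moore statistic E_k for k suspected outliers:
    ratio of the trimmed to the total sum of squared deviations."""
    n = len(x)
    mean = sum(x) / n
    # keep the n - k observations closest to the overall mean
    kept = sorted(x, key=lambda v: abs(v - mean))[: n - k]
    mean_k = sum(kept) / len(kept)
    num = sum((v - mean_k) ** 2 for v in kept)
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# Hypothetical sample with two gross outliers: E_2 should be near zero
e2 = tietjen_moore([0.1, 0.2, 0.15, 0.18, 5.0, -4.0], 2)
```

The defect noted in the abstract concerns exactly this ranking step (which observations get removed and tested), which the paper repairs by redefinition.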
9.
Jared L. Deutsch, Clayton V. Deutsch. Journal of Statistical Planning and Inference, 2012, 142(3): 763-772
Complex models can only be realized a limited number of times because of large computational requirements. Methods exist for generating input parameters for model realizations, including Monte Carlo simulation (MCS) and Latin hypercube sampling (LHS). Recent algorithms such as maximinLHS seek to maximize the minimum distance between model inputs in the multivariate space. A novel extension of Latin hypercube sampling (LHSMDU) for multivariate models is developed here that increases the multidimensional uniformity of the input parameters through sequential realization elimination. Correlations are incorporated into the LHSMDU sampling matrix using a Cholesky decomposition of the correlation matrix. Computer code implementing the proposed algorithm supplements this article. A simulation study comparing MCS, LHS, maximinLHS, and LHSMDU demonstrates that increased multidimensional uniformity can significantly improve realization efficiency and that LHSMDU is effective for large multivariate problems.
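Basic Latin hypercube sampling, which LHSMDU extends, can be sketched in a few lines: each dimension is divided into n equal-width strata, and each stratum receives exactly one point. This is the plain LHS baseline, not the authors' LHSMDU algorithm.

```python
import random

random.seed(1)

def latin_hypercube(n, d):
    """Basic Latin hypercube sample: n points in [0, 1)^d with exactly
    one point in each of the n equal-width strata per dimension."""
    # one independent random permutation of strata per dimension
    cols = []
    for _ in range(d):
        perm = list(range(n))
        random.shuffle(perm)
        cols.append([(p + random.random()) / n for p in perm])
    return [[cols[j][i] for j in range(d)] for i in range(n)]

pts = latin_hypercube(10, 3)
```

LHS guarantees one-dimensional stratification only; LHSMDU's sequential realization elimination targets uniformity of the joint, multidimensional spread, which plain LHS does not control.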
10.
Journal of Statistical Computation and Simulation, 2012, 82(4): 229-248
Identical numerical integration experiments are performed on a CYBER 205 and an IBM 3081 in order to gauge the relative performance of several methods of integration. The methods employed are the general methods of Gauss-Legendre, iterated Gauss-Legendre, Newton-Cotes, Romberg, and Monte Carlo, as well as three methods, due to Owen, Dutt, and Clark respectively, for integrating the normal density. The bi- and trivariate normal densities and four other functions are integrated; the latter four have integrals expressible in closed form, and some of them can be parameterized to exhibit singularities or highly periodic behavior. The various Gauss-Legendre methods tend to be the most accurate (when applied to the normal density they are even more accurate than the special-purpose methods designed for the normal), and while they are not the fastest, they are at least competitive. In scalar mode the CYBER is about 2-6 times faster than the IBM 3081, and the speed advantage of vectorised over scalar mode ranges from 6 to 15. Large-scale econometric problems of the probit type should now be routinely soluble.
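Composite 3-point Gauss-Legendre quadrature applied to the univariate normal density illustrates the accuracy the article reports for Gauss-Legendre methods. This is a generic sketch, not the article's code; the nodes and weights are the standard 3-point values on [-1, 1].

```python
import math

def gauss_legendre_3(f, a, b, panels=100):
    """Composite 3-point Gauss-Legendre quadrature of f over [a, b]:
    each panel is mapped from the reference interval [-1, 1]."""
    nodes = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
    weights = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)
    h = (b - a) / panels
    total = 0.0
    for i in range(panels):
        mid = a + (i + 0.5) * h  # panel center
        total += 0.5 * h * sum(w * f(mid + 0.5 * h * t)
                               for w, t in zip(weights, nodes))
    return total

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

approx = gauss_legendre_3(phi, -3.0, 3.0)
exact = math.erf(3.0 / math.sqrt(2.0))  # P(-3 < Z < 3) in closed form
```

The n-point rule is exact for polynomials up to degree 2n - 1 per panel, which is why Gauss-Legendre methods dominate on smooth integrands such as the normal density.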