Full-text access type
Paid full text | 3826 articles |
Free | 102 articles |
Free in China | 13 articles |
Subject classification
Management | 179 articles |
Ethnology | 1 article |
Demography | 37 articles |
Collected works | 21 articles |
Theory and methodology | 17 articles |
General | 317 articles |
Sociology | 24 articles |
Statistics | 3345 articles |
Publication year
2024 | 1 article |
2023 | 21 articles |
2022 | 31 articles |
2021 | 23 articles |
2020 | 67 articles |
2019 | 145 articles |
2018 | 160 articles |
2017 | 266 articles |
2016 | 124 articles |
2015 | 79 articles |
2014 | 110 articles |
2013 | 1146 articles |
2012 | 344 articles |
2011 | 94 articles |
2010 | 115 articles |
2009 | 131 articles |
2008 | 117 articles |
2007 | 88 articles |
2006 | 90 articles |
2005 | 87 articles |
2004 | 74 articles |
2003 | 59 articles |
2002 | 66 articles |
2001 | 61 articles |
2000 | 57 articles |
1999 | 59 articles |
1998 | 53 articles |
1997 | 42 articles |
1996 | 23 articles |
1995 | 20 articles |
1994 | 26 articles |
1993 | 19 articles |
1992 | 23 articles |
1991 | 8 articles |
1990 | 15 articles |
1989 | 9 articles |
1988 | 17 articles |
1987 | 8 articles |
1986 | 6 articles |
1985 | 4 articles |
1984 | 12 articles |
1983 | 13 articles |
1982 | 6 articles |
1981 | 5 articles |
1980 | 1 article |
1979 | 6 articles |
1978 | 5 articles |
1977 | 2 articles |
1975 | 2 articles |
1973 | 1 article |
Sort order: 3,941 results found (search took 584 ms)
971.
Generalized autoregressive conditional heteroskedastic (GARCH) models describe volatility with fewer parameters than autoregressive conditional heteroskedastic (ARCH)-type models; their distributions are heavy-tailed, have time-dependent conditional variance, and can model the clustering of volatility. Nevertheless, the way GARCH models are built imposes limits on the heaviness of the tails of their unconditional distribution. The class of randomized generalized autoregressive conditional heteroskedastic (R-GARCH) models includes the ARCH and GARCH models and allows the use of stable innovations. Estimation methods and empirical analysis of R-GARCH models are the focus of this work. We present the indirect inference method for estimating R-GARCH models, some simulations, and an empirical application.
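The volatility clustering that motivates this family can be seen in a minimal simulation of a plain GARCH(1,1) process (not the paper's R-GARCH model; the parameter values `omega`, `alpha`, `beta` are illustrative choices):

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.85, seed=0):
    """Simulate n returns from a GARCH(1,1) process with Gaussian innovations."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)                                  # returns
    h = np.full(n, omega / (1.0 - alpha - beta))     # conditional variance
    for t in range(1, n):
        # Conditional variance depends on the previous squared return:
        # large shocks raise tomorrow's variance, producing clustering.
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    return r, h

returns, variance = simulate_garch11(5000)
```

Replacing the Gaussian innovations with stable draws is, per the abstract, what the R-GARCH extension allows.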
972.
The purpose of this paper is to build a model for aggregate losses, a crucial step in evaluating premiums for health insurance systems. Using the Bayesian methodology, it aims to obtain the predictive distribution of the aggregate loss within each age class of insured persons over the planning horizon. The proposed model is a Bayesian generalization of the collective risk model, a model commonly used for analysing the risk of an insurance system. Aggregate loss prediction is based on past information on loss sizes, the number of losses, and the size of the population at risk. In modelling the frequency and severity of losses, the number of losses is assumed to follow a negative binomial distribution, individual loss sizes are independent and identically distributed exponential random variables, and the number of insured persons across a finite number of age groups is assumed to follow a multinomial distribution. Prediction of aggregate losses is based on a Gibbs sampling algorithm that incorporates the missing-data approach.
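The frequency/severity structure described above can be sketched by direct simulation: negative binomial claim counts with i.i.d. exponential claim sizes. The parameter values (`n_nb`, `p_nb`, `mean_loss`) are illustrative, not from the paper, and the Bayesian layer (priors, Gibbs sampling) is omitted:

```python
import numpy as np

def simulate_aggregate_loss(n_sims, n_nb=5, p_nb=0.5, mean_loss=100.0, seed=1):
    """Simulate the aggregate loss S = X_1 + ... + X_N for n_sims periods,
    with N ~ NegBinomial(n_nb, p_nb) and X_i ~ Exponential(mean_loss)."""
    rng = np.random.default_rng(seed)
    counts = rng.negative_binomial(n_nb, p_nb, size=n_sims)
    # Sum one exponential loss per claim in each simulated period
    return np.array([rng.exponential(mean_loss, k).sum() for k in counts])

losses = simulate_aggregate_loss(10_000)
# Sanity check: E[S] = E[N] * E[X], and E[N] = n(1-p)/p for NB(n, p)
expected_mean = 5 * (1 - 0.5) / 0.5 * 100.0  # = 500
```

The simulated mean should track `expected_mean` closely for a large number of periods.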
973.
In this article, the exponentiated Weibull distribution is extended by the Marshall-Olkin family. The new four-parameter family has a hazard rate function that takes various desired shapes depending on the choice of its parameters and is thus very flexible for data modeling. It contains two mixed distributions with applications to series and parallel systems in reliability, as well as several previously known lifetime distributions. We study some basic distributional properties of the new distribution and derive closed forms for its moment generating function and moments, as well as the moments of its order statistics. The model parameters are estimated by the maximum likelihood method. The stress-strength parameter and its estimation are also investigated. Finally, an application of the new model is illustrated using two real datasets.
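A sketch of such a four-parameter CDF, applying the standard Marshall-Olkin survival transform to an exponentiated Weibull baseline. The naming here (`tilt` for the Marshall-Olkin parameter; `shape`, `power`, `scale` for the baseline) is an assumption, and the paper's exact parametrization may differ:

```python
import math

def mo_exp_weibull_cdf(x, tilt, shape, power, scale):
    """CDF of a Marshall-Olkin exponentiated Weibull sketch.
    Baseline: F(x) = (1 - exp(-(x/scale)**shape))**power.
    MO transform: G = 1 - tilt*S / (1 - (1 - tilt)*S), with S = 1 - F."""
    if x <= 0:
        return 0.0
    base = (1.0 - math.exp(-((x / scale) ** shape))) ** power  # exp. Weibull CDF
    surv = 1.0 - base
    return 1.0 - tilt * surv / (1.0 - (1.0 - tilt) * surv)
```

With `tilt = 1` the transform is the identity, recovering the exponentiated Weibull; with `power = 1` as well, the ordinary Weibull, consistent with the family containing previously known lifetime distributions.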
974.
In this paper, maximum likelihood estimators (MLEs) for both step and linear drift changes in the regression parameters of multivariate linear profiles are developed. The performance of the proposed estimators is compared under linear drift changes in the regression parameters when a combined MEWMA and Chi-square control chart scheme signals an out-of-control condition. The effects of the MEWMA smoothing parameter, missing data, and multiple drift changes on the performance of both estimators are also evaluated. The application of the proposed estimators is illustrated through a numerical example based on a real case.
975.
Partially linear models extend linear models by including a nonparametric function of some covariate, allowing a more adequate and flexible handling of explanatory variables than linear models. Difference-based estimation in partially linear models estimates the parametric component by ordinary least squares after removing the nonparametric component from the model by differencing. However, least squares estimates do not represent the majority of the data well when the error distribution is not normal, particularly when the errors are heavy-tailed and outliers are present in the dataset. This paper aims to find an outlier-resistant fit that represents the information in the majority of the data by robustly estimating both the parametric and the nonparametric components of the partially linear model. Simulations and a real-data example illustrate the feasibility of the proposed methodology and compare it with the classical difference-based estimator when outliers exist.
976.
Adriano Ferreti Borgatto, Caio Azevedo, Aluisio Pinheiro, Dalton Andrade. Communications in Statistics - Simulation and Computation, 2015, 44(2): 474-488
The aim of the present study is to determine how the estimation of individual abilities obtained by item response theory (IRT) depends on the degree of test difficulty, and to evaluate how the estimation error may be affected by the estimation method employed. It is shown that abilities in scale regions with little test information are more efficiently estimated by the weighted maximum likelihood estimation (WLE) method, particularly abilities in the upper part of the scale. The study also demonstrates the importance of longer tests for ability estimation.
977.
Marco Bee. Communications in Statistics - Simulation and Computation, 2015, 44(8): 2040-2060
This article deals with the estimation of the lognormal-Pareto and the lognormal-generalized Pareto distributions, for which a general result concerning the asymptotic optimality of maximum likelihood estimation cannot be proved. We develop a method based on probability weighted moments, showing that it applies straightforwardly only to the first distribution. In the lognormal-generalized Pareto case, we propose a mixed approach combining maximum likelihood and probability weighted moments. Extensive simulations analyze the relative efficiencies of the methods in various setups. Finally, the techniques are applied to two real datasets in the actuarial and operational risk management fields.
978.
In this article, we consider the problem of estimating the shape and scale parameters, and predicting the unobserved removed data, based on a progressive Type-II censored sample from the Weibull distribution. Maximum likelihood and Bayesian approaches are used to estimate the scale and shape parameters. A sampling-based method is used to draw Monte Carlo (MC) samples, which are used to estimate the model parameters and to predict the removed units in the multiple stages of the censored sample. Two real datasets are presented and analyzed for illustrative purposes, and Monte Carlo simulations are performed to study the behavior of the proposed methods.
979.
Joseph P. Kaboski, Robert M. Townsend. Econometrica: Journal of the Econometric Society, 2011, 79(5): 1357-1406
This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high-yield indivisible investment opportunities. After estimating parameters using preprogram data, we evaluate the model's ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than in credit, which is interpreted as evidence of credit constraints. A cost-benefit analysis using the model indicates that some households value the program much more than its per-household cost, but overall the program costs 30 percent more than the sum of these benefits.
980.
Gambino B. Journal of Gambling Studies, 2006, 22(4): 393-404
The difference between test accuracy and predictive accuracy is presented and defined. The failure to distinguish between these two types of measures is shown to have led to a misguided debate over the interpretation of prevalence estimates. The distinction between test accuracy, defined as sensitivity and specificity, and predictive accuracy, defined as positive and negative predictive value, is shown to reflect the choice of denominator used to calculate the true positive, false positive, false negative, and true negative rates. It is further shown that any instrument will tend to overestimate prevalence in low base-rate populations and underestimate it in populations where prevalence is high. The implications of these observations are then discussed in terms of the need to define diagnostic thresholds that have clinical and policy relevance.
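The base-rate dependence described above follows directly from Bayes' rule: sensitivity and specificity are fixed properties of the instrument, while positive predictive value (PPV) varies with prevalence. A minimal sketch, with illustrative accuracy figures not taken from the paper:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(condition | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same 95%-sensitive, 95%-specific instrument at two base rates:
low = ppv(0.95, 0.95, 0.01)   # ~0.16 at 1% prevalence: most positives are false
high = ppv(0.95, 0.95, 0.30)  # ~0.89 at 30% prevalence
```

This is the mechanism behind the abstract's claim that an instrument overestimates prevalence in low base-rate populations: at a 1% base rate, false positives from the healthy majority swamp the true positives.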