1,138 results found (search time: 62 ms)
41.
We propose a cure rate survival model by assuming that the number of competing causes of the event of interest follows the negative binomial distribution and that the time to the event of interest has the Birnbaum-Saunders distribution. The new model includes as special cases several well-known cure rate models published recently. We consider a frequentist analysis for parameter estimation of the negative binomial Birnbaum-Saunders model with cure rate, and then derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. We illustrate the usefulness of the proposed model in the analysis of a real data set from the medical area.
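A minimal numerical sketch of the population survival function such a construction implies, assuming the common power-parameterized negative binomial formulation S_pop(t) = (1 + η·θ·F(t))^(−1/η) with F the Birnbaum-Saunders cdf; the parameter names and values below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import fatiguelife  # Birnbaum-Saunders ("fatigue life") distribution

def pop_survival(t, theta, eta, alpha, beta):
    """Population (improper) survival function of an NB cure rate model:
    S_pop(t) = (1 + eta * theta * F(t)) ** (-1 / eta),
    where F is the Birnbaum-Saunders cdf with shape alpha and scale beta."""
    F = fatiguelife.cdf(t, c=alpha, scale=beta)
    return (1.0 + eta * theta * F) ** (-1.0 / eta)

# The cure fraction is the limit as t -> infinity, where F -> 1:
theta, eta = 2.0, 0.5
cure_fraction = (1.0 + eta * theta) ** (-1.0 / eta)  # (1 + 1)^(-2) = 0.25
```

Because the cure fraction is strictly positive, S_pop is improper: it plateaus at the cured proportion instead of decaying to zero.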
42.
Yasumasa Matsuda, Communications in Statistics - Theory and Methods, 2013, 42(9): 2257-2273
In this paper, the functional coefficient autoregressive (FAR) models proposed by Chen and Tsay (1993) are considered. We propose a diagnostic statistic for FAR models, constructed by comparing parametric and nonparametric estimators of the functional form of the FAR model. We establish the asymptotic properties of the statistic and show that it can be applied to estimating the delay parameter and specifying the functional form of FAR models.
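The parametric-versus-nonparametric comparison can be sketched in a simplified setting, pitting a Nadaraya-Watson fit against a linear AR(1) fit and taking their sup distance as a diagnostic; this is a stand-in for the idea, not the paper's exact statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a linear AR(1): X_t = 0.5 * X_{t-1} + eps  (the parametric null).
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()

y, lag = x[1:], x[:-1]

# Parametric estimate: least-squares AR(1) coefficient.
a_hat = np.dot(lag, y) / np.dot(lag, lag)

def nw(x0, h=0.5):
    """Nadaraya-Watson estimate of E[X_t | X_{t-1} = x0] with bandwidth h."""
    w = np.exp(-0.5 * ((lag - x0) / h) ** 2)
    return np.dot(w, y) / w.sum()

# Diagnostic: sup distance between the parametric and nonparametric fits
# over the central range of the data; large values suggest the linear
# functional form is inadequate.
grid = np.linspace(np.quantile(lag, 0.1), np.quantile(lag, 0.9), 50)
diag = max(abs(nw(x0) - a_hat * x0) for x0 in grid)
```

Under the linear null the diagnostic stays small; under a genuinely functional coefficient it would not.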
43.
Wang Hui, Journal of Inner Mongolia University for Nationalities, 1997, (3)
This paper compares the practical performance of several methods for selecting the ridge parameter. The results show that no single method is uniformly superior to the others. Under the mean squared error criterion, however, the estimates produced by all of the ridge parameter selection methods improve on the least squares estimate when the design matrix is ill-conditioned.
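The kind of comparison described above can be illustrated as follows; the design matrix, noise level, and grid of ridge parameters are illustrative choices, not the selection rules the paper compares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned design: two nearly collinear predictors.
n = 200
z = rng.normal(size=n)
X = np.column_stack([z, z + 0.01 * rng.normal(size=n)])
beta = np.array([1.0, 2.0])
y = X @ beta + rng.normal(size=n)

XtX = X.T @ X

def ridge(k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives least squares."""
    return np.linalg.solve(XtX + k * np.eye(2), X.T @ y)

# Squared estimation error of OLS versus a few arbitrary ridge parameters.
ols_err = np.sum((ridge(0.0) - beta) ** 2)
ridge_errs = {k: np.sum((ridge(k) - beta) ** 2) for k in (0.1, 1.0, 10.0)}
```

With the design this collinear, the OLS error is dominated by variance along the near-null direction of X'X, which is exactly what the ridge term damps.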
44.
Liangjun Su, Zhentao Shi, Peter C. B. Phillips, Econometrica, 2016, 84(6): 2215-2264
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered: penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier-Lasso (C-Lasso) that serves to shrink individual coefficients to the unknown group-specific coefficients. C-Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C-Lasso also achieves the oracle property so that group-specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C-Lasso is preserved in some special cases. Simulations demonstrate good finite-sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.
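A sketch of the C-Lasso penalty term itself, which for K candidate group centers multiplies each individual coefficient's distances to all centers, so the penalty vanishes exactly when a coefficient coincides with some center; only the penalty evaluation is shown (the full PPL/PGMM optimization is not reproduced), and the data are hypothetical:

```python
import numpy as np

def c_lasso_penalty(beta_i, alphas, lam=1.0):
    """C-Lasso penalty: for each individual coefficient vector beta_i[i],
    multiply its Euclidean distances to all K candidate group centers
    alphas[k]. The product is zero iff beta_i[i] coincides with some
    center, which is what drives the classification."""
    prods = np.ones(beta_i.shape[0])
    for alpha_k in alphas:
        prods *= np.linalg.norm(beta_i - alpha_k, axis=1)
    return lam * prods.mean()

# Two candidate centers; individuals sitting exactly on a center
# contribute nothing to the penalty.
alphas = np.array([[1.0], [3.0]])
betas = np.array([[1.0], [3.0], [2.0]])  # third one lies between groups
```

Only the unclassified third coefficient is penalized, at distance product 1 × 1, so the average penalty is 1/3.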
45.
Binary logistic regression is a widely used statistical method when the dependent variable has two categories. In many applications of logistic regression, the independent variables are collinear, a situation known as the multicollinearity problem. It is known that multicollinearity inflates the variance of the maximum likelihood estimator (MLE). Therefore, this article introduces new shrinkage parameters for the Liu-type estimator of Liu (2003), applied in the logistic regression model considered by Huang (2012), in order to decrease the variance and overcome the multicollinearity problem. A Monte Carlo study is designed to show the superiority of the proposed estimators over the MLE in terms of mean squared error (MSE) and mean absolute error (MAE). Moreover, a real data example is given to demonstrate the advantages of the new shrinkage parameters.
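One commonly cited Liu-type form in logistic regression can be sketched as below; the formula β(k, d) = (X'WX + kI)^(−1)(X'WX − dI)β̂_MLE with W = diag(π(1 − π)), and all inputs, are assumptions for illustration, and the article's specific shrinkage-parameter choices for k and d are not reproduced:

```python
import numpy as np

def liu_type_logistic(X, beta_mle, k=1.0, d=0.5):
    """Liu-type estimator in logistic regression (one common form):
    beta(k, d) = (X'WX + kI)^{-1} (X'WX - dI) beta_mle,
    with W = diag(pi * (1 - pi)) evaluated at the MLE."""
    p = X.shape[1]
    pi = 1.0 / (1.0 + np.exp(-X @ beta_mle))
    W = np.diag(pi * (1.0 - pi))
    XtWX = X.T @ W @ X
    return np.linalg.solve(XtWX + k * np.eye(p), (XtWX - d * np.eye(p)) @ beta_mle)

# Hypothetical design and MLE, purely for illustration.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
beta_mle = np.array([0.5, -1.0, 0.2])
shrunk = liu_type_logistic(X, beta_mle)
```

For k > 0 and d > −k the map is a contraction on the eigenbasis of X'WX, which is where the variance reduction comes from.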
46.
We propose an exploratory data analysis approach when data are observed as intervals in a nonparametric regression setting. The interval-valued data contain richer information than single-valued data in the sense that they provide both center and range information of the underlying structure. Conventionally, these two attributes have been studied separately as traditional tools can be readily used for single-valued data analysis. We propose a unified data analysis tool that attempts to capture the relationship between response and covariate by simultaneously accounting for variability present in the data. It utilizes a kernel smoothing approach, which is conducted in scale-space so that it considers a wide range of smoothing parameters rather than selecting an optimal value. It also visually summarizes the significance of trends in the data as a color map across multiple locations and scales. We demonstrate its effectiveness as an exploratory data analysis tool for interval-valued data using simulated and real examples.
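A rough sketch of the center-and-range idea in scale-space form: smooth both attributes of the intervals with a kernel smoother over a whole grid of bandwidths rather than one selected value. The data and bandwidth grid below are hypothetical, and the paper's significance color map is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# Interval-valued responses: observe [lower, upper] at each covariate value.
x = np.linspace(0, 1, 100)
center = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)
half_range = 0.2 + 0.05 * rng.normal(size=x.size) ** 2
lower, upper = center - half_range, center + half_range

def nw_smooth(target, h):
    """Nadaraya-Watson smoother of `target` on x with bandwidth h."""
    out = np.empty_like(target)
    for j, x0 in enumerate(x):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        out[j] = np.dot(w, target) / w.sum()
    return out

# Scale-space: smooth center and range jointly over a grid of bandwidths
# instead of picking a single "optimal" one.
bandwidths = [0.02, 0.05, 0.1, 0.2]
family = {h: (nw_smooth((lower + upper) / 2, h), nw_smooth(upper - lower, h))
          for h in bandwidths}
```

Each bandwidth yields one (center, range) curve pair; inspecting the whole family is the scale-space substitute for bandwidth selection.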
47.
Frank R. Wondolowski, Decision Sciences, 1991, 22(4): 792-811
A long-standing criticism of linear programming has been that the data available in practice are too inexact and unreliable for linear programming to work properly. Managers are therefore concerned with how much actual values may differ from the estimates used in the model before the results become irrelevant. Sensitivity analysis emerged to help deal with the uncertainties inherent in the linear programming model. However, the ranges calculated are generally valid only when a single coefficient is varied. An extension of sensitivity analysis, the 100 Percent Rule, allows the simultaneous variation of more than one element in a vector, but does not permit the independent variation of the elements. A tolerance approach to sensitivity analysis enables the consideration of simultaneous and independent changes in more than one coefficient. However, the ranges developed are unnecessarily restricted and may be reduced in width to zero when primal or dual degeneracy exists. This paper presents an extension of the tolerance approach that reduces the limitations of both the traditional and tolerance approaches to sensitivity analysis.
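The tolerance idea for cost coefficients can be illustrated by brute force on a tiny hypothetical LP: find fractions τ such that the optimum survives every combination of independent ±τ changes to the cost coefficients. This is a crude numerical stand-in for the analytic tolerance ranges discussed above, not the paper's method:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny hypothetical LP:  min c'x  s.t.  A_ub x <= b_ub,  x >= 0.
c = np.array([-3.0, -5.0])
A_ub = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])

base = linprog(c, A_ub=A_ub, b_ub=b_ub).x

def basis_stable(tau):
    """Check, over all extreme perturbations, whether the optimal solution
    is unchanged when each cost coefficient varies independently by up to
    a fraction tau of its magnitude."""
    for s0 in (-1, 1):
        for s1 in (-1, 1):
            c_pert = c * (1 + tau * np.array([s0, s1]))
            x = linprog(c_pert, A_ub=A_ub, b_ub=b_ub).x
            if not np.allclose(x, base, atol=1e-7):
                return False
    return True

# Tolerances on a coarse grid that leave the optimum intact.
taus = [t for t in (0.05, 0.10, 0.20, 0.40) if basis_stable(t)]
```

For this LP the optimum (2, 6) survives all independent cost perturbations up to τ = 3/7, so every grid value passes.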
48.
Joseph S. Martinich, Decision Sciences, 1991, 22(1): 53-59
For convex and concave mathematical programs, restrictive constraints (i.e., those whose deletion would change the optimum) will always be binding at the optimum, and vice versa. Less well known is the fact that this property does not hold more generally, even for problems with convex feasible sets. This paper demonstrates the latter fact using numerical illustrations of common classes of problems. It then discusses the implications for public policy analysis, econometric estimation, and solution algorithms.
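The convex case referenced above, where binding and restrictive coincide, can be checked numerically on a small linear program; this is a hypothetical example, not one of the paper's illustrations (which concern the failure of the equivalence in non-convex settings):

```python
from scipy.optimize import linprog

# Convex (linear) program:  min -x - y  s.t.  x + y <= 1, x <= 5, x, y >= 0.
res_full = linprog([-1.0, -1.0], A_ub=[[1, 1], [1, 0]], b_ub=[1, 5])

# Delete the binding constraint x + y <= 1: the optimum changes (the
# problem in fact becomes unbounded), so the constraint is restrictive.
res_drop_binding = linprog([-1.0, -1.0], A_ub=[[1, 0]], b_ub=[5])

# Delete the non-binding constraint x <= 5: the optimum is unchanged,
# so that constraint is not restrictive.
res_drop_slack = linprog([-1.0, -1.0], A_ub=[[1, 1]], b_ub=[1])
```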
49.
The importance of sensitivity analysis information in linear programming has been stressed in the management science literature for some time. Indeed, Gal [3] has devoted an entire text to just this issue. Linear programs with common inputs (cost coefficients or right-hand-side values) present a problem in that classical sensitivity analysis does not allow for the simultaneous changes required to determine the sensitivity of these models to common inputs. We first survey the approaches previously developed for simultaneous-change sensitivity analysis and cast them in the framework of the special common input case. These general techniques are compared to a simple aggregate variable technique that has not received attention in the literature.
50.
A smoothed bootstrap method is presented for the purpose of bandwidth selection in nonparametric hazard rate estimation for iid data. In this context, two new bootstrap bandwidth selectors are established based on the exact expression of the bootstrap version of the mean integrated squared error of some approximations of the kernel hazard rate estimator. This is very useful since Monte Carlo approximation is no longer needed for the implementation of the two bootstrap selectors. A simulation study is carried out in order to show the empirical performance of the new bootstrap bandwidths and to compare them with other existing selectors. The methods are illustrated by applying them to a diabetes data set.
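One simple variant of a kernel hazard rate estimator for iid uncensored data takes the ratio of a Gaussian kernel density estimate to the empirical survival function; this is related to, but not identical to, the estimator analyzed in the paper, and the bandwidth here is fixed rather than bootstrap-selected:

```python
import numpy as np

def kernel_hazard(t, data, h):
    """Kernel hazard rate estimate h(t) = f_hat(t) / S_hat(t):
    Gaussian KDE in the numerator, empirical survival in the denominator."""
    data = np.asarray(data)
    n = data.size
    f_hat = np.exp(-0.5 * ((t - data) / h) ** 2).sum() / (n * h * np.sqrt(2 * np.pi))
    s_hat = np.mean(data > t)
    return f_hat / max(s_hat, 1.0 / n)  # guard against division by zero

# Exponential(1) data have constant true hazard equal to 1, which makes
# them a convenient sanity check.
rng = np.random.default_rng(4)
sample = rng.exponential(size=2000)
est = kernel_hazard(1.0, sample, h=0.3)
```

Bandwidth selection matters here just as in density estimation, which is what the bootstrap selectors in the paper target.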