Full-text access type
Paid full text | 3955 articles |
Free | 126 articles |
Domestic free | 15 articles |
Subject category
Management | 407 articles |
Ethnology | 6 articles |
Demography | 69 articles |
Book series and collected works | 57 articles |
Theory and methodology | 97 articles |
General | 373 articles |
Sociology | 251 articles |
Statistics | 2836 articles |
Publication year
2024 | 1 article |
2023 | 27 articles |
2022 | 26 articles |
2021 | 39 articles |
2020 | 69 articles |
2019 | 123 articles |
2018 | 175 articles |
2017 | 251 articles |
2016 | 121 articles |
2015 | 132 articles |
2014 | 137 articles |
2013 | 910 articles |
2012 | 300 articles |
2011 | 148 articles |
2010 | 132 articles |
2009 | 164 articles |
2008 | 157 articles |
2007 | 148 articles |
2006 | 127 articles |
2005 | 141 articles |
2004 | 116 articles |
2003 | 96 articles |
2002 | 73 articles |
2001 | 74 articles |
2000 | 69 articles |
1999 | 56 articles |
1998 | 50 articles |
1997 | 38 articles |
1996 | 19 articles |
1995 | 16 articles |
1994 | 29 articles |
1993 | 17 articles |
1992 | 19 articles |
1991 | 13 articles |
1990 | 11 articles |
1989 | 8 articles |
1988 | 12 articles |
1987 | 7 articles |
1986 | 5 articles |
1985 | 8 articles |
1984 | 6 articles |
1983 | 9 articles |
1982 | 8 articles |
1981 | 1 article |
1980 | 2 articles |
1979 | 2 articles |
1978 | 2 articles |
1977 | 1 article |
1976 | 1 article |
Sort order: 4096 results found, search time 636 ms
291.
The problem of limiting the disclosure of information gathered on a set of companies or individuals (the respondents) is considered, the aim being to provide useful information while preserving the confidentiality of sensitive information. The paper proposes a method which explicitly preserves certain information contained in the data. The data are assumed to consist of two sets of information on each respondent: public data and specific survey data. It is assumed in this paper that both sets of data are liable to be released for a subset of respondents; however, the public data will be altered in some way to preserve confidentiality, whereas the specific survey data are to be disclosed without alteration. The paper proposes a model-based approach to this problem, utilizing the information contained in the sufficient statistics obtained from fitting a model to the public data by conditioning on the survey data. Deterministic and stochastic variants of the method are considered.
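The general idea of altering public data while preserving the information in a sufficient statistic can be sketched as follows. This is a toy illustration under a normal-model assumption, not the procedure proposed in the paper; all names and numbers are invented:

```python
import random

def perturb_preserving_mean(values, scale=1.0, seed=0):
    """Add Gaussian noise to each value, then re-centre so the sample mean
    (a sufficient statistic under a normal model) is exactly preserved."""
    rng = random.Random(seed)
    noisy = [v + rng.gauss(0.0, scale) for v in values]
    shift = sum(values) / len(values) - sum(noisy) / len(noisy)
    return [v + shift for v in noisy]

public = [12.0, 15.5, 9.8, 20.1, 13.4]     # hypothetical public data
released = perturb_preserving_mean(public)
# Individual values are altered, but the mean is unchanged.
```

A stochastic variant would draw the noise scale itself at random; a deterministic variant would apply a fixed transformation with the same invariance property.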
292.
Thomas Nittner 《Statistical Methods and Applications》2003,12(2):195-210
The additive model is considered when some observations on x are missing at random but the corresponding observations on y are available. Missing at random is an interesting case for this model in particular, because the complete case analysis is expected to be no longer suitable. A simulation experiment is reported, and the different methods are compared based on their superiority with respect to the sample mean squared error. Some attention is also given to the sample variance and the estimated bias. In detail, the complete case analysis, a kind of stochastic mean imputation, a single imputation, and nearest neighbor imputation are discussed.
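The competing strategies for handling missing x can be illustrated with a toy sketch; the data are invented, and the paper's stochastic mean imputation additionally adds a random residual, which is omitted here:

```python
def impute(x, y):
    """x: list with None marking missing values; y: fully observed.
    Returns complete-case, mean-imputed, and nearest-neighbour versions."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if xi is not None]
    xbar = sum(xi for xi, _ in obs) / len(obs)

    complete_case = [xi for xi in x if xi is not None]       # drop missing rows
    mean_imp = [xi if xi is not None else xbar for xi in x]  # plug in the mean
    nn_imp = [xi if xi is not None
              else min(obs, key=lambda p: abs(p[1] - yi))[0]  # donor closest in y
              for xi, yi in zip(x, y)]
    return complete_case, mean_imp, nn_imp

x = [1.0, None, 3.0, None, 5.0]
y = [1.0, 2.1, 3.0, 4.1, 5.0]
cc, mi, nn = impute(x, y)
```

In a simulation one would repeat this over many samples and compare the resulting estimators by sample MSE, as the paper does.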
293.
Building on a newly proposed lead-time crashing cost function with a piecewise-differentiable structure, this paper constructs a two-stage (Q, r) inventory model. Under a service-level constraint, the piecewise differentiability of the cost function is further exploited, together with results from mathematical analysis, to decompose the original problem into an unconstrained problem and an equality-constrained problem. Theoretical analysis and a numerical example show that, by appropriately choosing the order quantity Q, the reorder point r, and the lead-time acceleration factor τ, the cost can be optimized while the service level is improved.
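A piecewise lead-time crashing cost of this general kind can be sketched as follows; the segment durations and unit costs are hypothetical, not the paper's cost function:

```python
def crashing_cost(lead_time, segments):
    """Piecewise-linear cost of compressing lead time.
    segments: list of (duration, unit_cost) pairs sorted by increasing
    unit cost; compression consumes the cheapest segments first."""
    full = sum(d for d, _ in segments)
    reduce_by = full - lead_time            # how much lead time to crash
    cost = 0.0
    for duration, unit in segments:
        take = min(duration, max(reduce_by, 0.0))
        cost += take * unit
        reduce_by -= take
    return cost

# hypothetical: 2 weeks crashable at 10/week, then 3 weeks at 25/week
segments = [(2.0, 10.0), (3.0, 25.0)]
# full lead time is 5 weeks; crashing to 4 weeks uses the cheap segment only
```

The kinks at segment boundaries are exactly where the function fails to be differentiable, which is why the optimization splits into per-segment subproblems.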
294.
《Journal of Business &amp; Economic Statistics》2013,31(1):138-149
We introduce a new multivariate GARCH model with multivariate thresholds in conditional correlations and develop a two-step estimation procedure that is feasible in large dimensional applications. Optimal threshold functions are estimated endogenously from the data and the model conditional covariance matrix is ensured to be positive definite. We study the empirical performance of our model in two applications using U.S. stock and bond market data. In both applications our model has, in terms of statistical and economic significance, higher forecasting power than several other multivariate GARCH models for conditional correlations.
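A drastically simplified sketch of the threshold idea, stripped of the GARCH dynamics: a threshold variable selects which regime-specific correlation applies. The regimes and numbers are invented:

```python
def threshold_correlation(z, thresholds, rhos):
    """Return the regime-specific correlation implied by threshold
    variable z: rhos[k] applies when z lies in the k-th interval cut
    by the sorted thresholds (len(rhos) == len(thresholds) + 1)."""
    k = sum(1 for t in thresholds if z > t)
    return rhos[k]

# hypothetical regimes: correlation 0.8 when z is below zero, 0.3 above
rho = threshold_correlation(-0.5, [0.0], [0.8, 0.3])
```

In the actual model the threshold functions are estimated from the data rather than fixed, and the correlations enter a full conditional covariance matrix kept positive definite.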
295.
Patrizio Frederic 《Communications in Statistics - Simulation and Computation》2013,42(7):1263-1269
We display the first two moment functions of the Logitnormal(μ, σ²) family of distributions, conveniently described in terms of the Normal mean, μ, and the Normal signal-to-noise ratio, μ/σ, parameters that generate the family. Long neglected on account of the numerical integrations required to compute them, awareness of these moment functions should aid the sensible interpretation of logistic regression statistics and the specification of “diffuse” prior distributions in hierarchical models, which can be deceiving. We also use numerical integration to compare the correlation between bivariate Logitnormal variables with the correlation between the bivariate Normal variables from which they are transformed.
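The moments in question have no closed form, but the numerical integration is routine. A minimal sketch using the trapezoidal rule (the grid size and truncation bounds are ad hoc choices, not the paper's):

```python
import math

def logitnormal_moments(mu, sigma, n=4000):
    """Mean and variance of Y = 1/(1+exp(-X)) with X ~ N(mu, sigma^2),
    computed by trapezoidal integration over mu +/- 10 sigma."""
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    h = (hi - lo) / n
    m1 = m2 = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0     # trapezoid end-point weights
        pdf = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) \
              / (sigma * math.sqrt(2 * math.pi))
        y = 1.0 / (1.0 + math.exp(-x))      # logistic transform
        m1 += w * y * pdf * h
        m2 += w * y * y * pdf * h
    return m1, m2 - m1 * m1

mean, var = logitnormal_moments(0.0, 1.0)
# by symmetry of the logistic function, the mean is 0.5 when mu = 0
```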
296.
Earlier attempts at reconciling disparate substitution elasticity estimates examined differences in separability hypotheses, databases, and estimation techniques, as well as the methods employed to construct capital service prices. Although these studies showed that differences in elasticity estimates between two or three studies may be attributable to the aforementioned features of the econometric models, they were unable to demonstrate this link statistically and establish the existence of systematic relationships between features of the econometric models and the perception of production technologies generated by those models. Using sectoral data covering the entire production side of the U.S. economy, we estimate 34 production models for alternative definitions of the capital service price. We employ substitution elasticities calculated from these models as dependent variables in the statistical search for systematic relationships between features of the econometric models and perceptions of the sectoral technology as characterized by the elasticities. Statistically significant systematic effects are found between the monotonicity and concavity properties of the cost functions and the service price and technical-change specifications, as well as among the substitution elasticities.
297.
Göran Kauermann Christian Schellhase David Ruppert 《Scandinavian Journal of Statistics》2013,40(4):685-705
The paper introduces a new method for flexible spline fitting for copula density estimation. Spline coefficients are penalized to achieve a smooth fit. To weaken the curse of dimensionality, instead of a full tensor spline basis, a reduced tensor product based on so-called sparse grids (Notes Numer. Fluid Mech. Multidiscip. Des., 31, 1991, 241-251) is used. To achieve uniform margins of the copula density, linear constraints are placed on the spline coefficients, and quadratic programming is used to fit the model. Simulations and practical examples accompany the presentation.
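The saving from a sparse grid over a full tensor product can be illustrated by counting level-index sets. This toy sketch uses the classical sparse-grid constraint |l|₁ ≤ n in place of max(l) ≤ n; it is an illustration of the principle, not the paper's implementation:

```python
from itertools import product

def sparse_grid_levels(dim, n):
    """Level-index sets of a classical sparse grid: keep tensor levels l
    with sum(l) <= n instead of max(l) <= n (the full tensor product).
    The count grows polynomially rather than exponentially in dim."""
    return [l for l in product(range(n + 1), repeat=dim) if sum(l) <= n]

sparse = sparse_grid_levels(2, 2)   # 6 index sets, versus 9 for the full tensor
```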
298.
A method for combining forecasts may or may not account for dependence and differing precision among forecasts. In this article we test a variety of such methods in the context of combining forecasts of GNP from four major econometric models. The methods include one in which forecasting errors are jointly normally distributed and several variants of this model as well as some simpler procedures and a Bayesian approach with a prior distribution based on exchangeability of forecasters. The results indicate that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches that are studied here.
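Two of the simpler schemes, the simple average and an inverse-variance weighting that ignores dependence (the normal model under independence), can be sketched as follows; the GNP figures are invented:

```python
def combine_forecasts(forecasts, error_vars=None):
    """Combine point forecasts: simple average by default, or
    precision-weighted (inverse error-variance) when variances are given.
    The weighting ignores dependence among forecasters."""
    if error_vars is None:
        return sum(forecasts) / len(forecasts)
    weights = [1.0 / v for v in error_vars]
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)

gnp = [3.1, 2.8, 3.4, 2.9]      # hypothetical forecasts from four models
simple = combine_forecasts(gnp)                          # 3.05
weighted = combine_forecasts(gnp, [0.5, 1.0, 2.0, 1.0])  # favours precise models
```

A full treatment of dependence would replace the diagonal weights with the inverse of the error covariance matrix.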
299.
Joint models for longitudinal data and time-to-event data have recently received considerable attention in clinical and epidemiologic studies. Our interest is in modeling the relationship between event-time outcomes and internal time-dependent covariates. In practice, the longitudinal responses often show nonlinear and fluctuating curves. Therefore, the main aim of this paper is to use penalized splines with a truncated polynomial basis to parameterize the nonlinear longitudinal process. Then, the linear mixed-effects model is applied to subject-specific curves and to control the smoothing. The association between the dropout process and longitudinal outcomes is modeled through a proportional hazards model. Two types of baseline risk functions are considered, namely a Gompertz distribution and a piecewise constant model. The resulting models are referred to as penalized spline joint models, an extension of the standard joint models. The expectation conditional maximization (ECM) algorithm is applied to estimate the parameters in the proposed models. To validate the proposed algorithm, extensive simulation studies were implemented, followed by a case study. In summary, the penalized spline joint models provide a new approach that improves on the existing standard joint models.
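The truncated polynomial basis used to parameterize the nonlinear longitudinal trajectory can be sketched as follows; the degree and knot locations are illustrative choices:

```python
def truncated_power_basis(x, knots, degree=1):
    """One row of a penalized-spline design matrix with a truncated
    polynomial basis: [1, x, ..., x^p, (x-k1)_+^p, (x-k2)_+^p, ...].
    The (x-k)_+ terms switch on past each knot, bending the fit there."""
    row = [x ** d for d in range(degree + 1)]
    row += [max(x - k, 0.0) ** degree for k in knots]
    return row

row = truncated_power_basis(2.5, knots=[1.0, 2.0, 3.0], degree=1)
# [1.0, 2.5, 1.5, 0.5, 0.0]
```

In the penalized-spline joint model, the coefficients of the truncated terms are treated as random effects, which is what lets a linear mixed-effects model control the smoothing.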
300.
The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subject to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis, and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previously published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
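The COM-Poisson pmf underlying the GLM can be evaluated with a truncated series for its normalizing constant; a minimal sketch, where the truncation length is an ad hoc choice:

```python
import math

def com_poisson_pmf(y, lam, nu, terms=100):
    """P(Y = y) for the Conway-Maxwell-Poisson distribution, whose pmf is
    proportional to lam^y / (y!)^nu.  nu > 1 yields underdispersion,
    nu < 1 overdispersion, and nu = 1 recovers the Poisson."""
    z, term = 0.0, 1.0                      # normalizing constant Z(lam, nu)
    for j in range(terms):
        z += term
        term *= lam / (j + 1) ** nu         # stable recursive term update
    return (lam ** y / math.factorial(y) ** nu) / z

p = com_poisson_pmf(2, 3.0, 1.0)  # nu = 1: should match Poisson(3) at y = 2
```

The recursive update avoids computing large factorials directly, which would overflow floating point for long series.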