991.
Aviral Kumar Tiwari. Journal of Applied Statistics, 2015, 42(3): 662–675.
The literature on the export-led growth (ELG) hypothesis, which is of utmost importance for policymaking in emerging countries, provides mixed evidence for the validity of the hypothesis. Recent contributions focus on the time-dependence of the relationship between export and output growth using rolling causality techniques based on vector autoregressive models. These models take a short-term view that captures single policy-induced developments; long-term structural changes, however, cannot be covered by short-term examinations. This paper therefore examines the time-varying validity of the ELG hypothesis for India over the period 1960–2011, using rolling causality techniques for both the short-run and the long-run horizon. For the first time, window-wise optimal lag-selection procedures are applied in connection with these techniques. We find that exports caused output growth in the long run from 1997 until 2009, which can be seen as a consequence of the political reforms of the 1990s that boosted economic growth by generating foreign direct investment opportunities and higher exports. In the short run, exports significantly caused output in the period 1998–2003, which followed a concentration of liberalization measures in 1997. Causality in the reverse direction, from output to exports, appears relevant only in the short run.
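The paper's exact procedure is not reproduced here, but the core idea of rolling-window causality testing can be sketched. In this illustrative Python fragment the window length, maximum lag, and AIC-based window-wise lag choice are stand-in assumptions, not the paper's settings:

```python
# Minimal sketch of rolling-window Granger causality (short-run direction).
# Window length, maxlag, and the AIC-based per-window lag choice are
# illustrative assumptions, not the paper's specification.
import pandas as pd
from statsmodels.tsa.api import VAR

def rolling_granger_pvalues(df, window=20, maxlag=4):
    """p-value of 'x Granger-causes y' in each rolling window.

    df must have columns 'y' (output growth) and 'x' (export growth).
    """
    pvals = {}
    for start in range(len(df) - window + 1):
        chunk = df.iloc[start:start + window]
        res = VAR(chunk).fit(maxlags=maxlag, ic="aic")  # window-wise lag
        if res.k_ar == 0:        # no dynamics selected in this window
            continue
        pvals[chunk.index[-1]] = res.test_causality("y", ["x"], kind="f").pvalue
    return pd.Series(pvals)
```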
992.
Bertil Wegmann. Journal of Applied Statistics, 2015, 42(2): 380–397.
Private values and common values (CVs) are the two main competing valuation models in auction theory and empirical work. In the framework of second-price auctions, we compare the empirical performance of the independent private value (IPV) model with the CV model along a number of dimensions, both on real data from eBay coin auctions and on simulated data. Both models fit the eBay data well, with a slight edge for the CV model. However, the differences in fit seem to depend to some extent on the complexity of the models. According to the log predictive score, the IPV model predicts auction prices slightly better in most auctions, while the more robust CV model is much better at predicting prices in more unusual auctions. In terms of posterior odds, the CV model is clearly better supported by the eBay data.
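The log predictive score used for the comparison reduces to a simple computation once each model's predictive densities are available. A minimal sketch (sign and scaling conventions for the score vary across authors; the variable names are hypothetical):

```python
# Minimal sketch of a log predictive score comparison. Assumes each model
# supplies predictive densities p(price_i | data) for held-out auctions;
# the input arrays below are hypothetical.
import numpy as np

def log_predictive_score(pred_density):
    """Average log predictive density over held-out auction prices."""
    return np.mean(np.log(pred_density))

# Hypothetical usage with densities from fitted IPV and CV models:
# lps_ipv = log_predictive_score(ipv_densities)
# lps_cv  = log_predictive_score(cv_densities)
```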
993.
We propose a penalized minimum φ-divergence estimator for parameter estimation and variable selection in logistic regression. Using an appropriate penalty function, we show that the penalized φ-divergence estimator has the oracle property: with probability tending to 1, it identifies the true model and estimates the nonzero coefficients as efficiently as if the sparsity of the true model were known in advance. The advantage of the penalized φ-divergence estimator is that it estimates the nonzero parameters more efficiently than the penalized maximum likelihood estimator when the sample size is small, and is equivalent to it for large samples. Numerical simulations confirm our findings.
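The φ-divergence estimator itself is not available in standard libraries. As a point of reference only, the following sketch fits the penalized maximum-likelihood (lasso) competitor that the abstract compares against, on simulated sparse data; the data-generating values and penalty strength are illustrative:

```python
# Reference baseline only: penalized maximum likelihood (L1) logistic
# regression, the competitor named in the abstract. Simulated truth and
# C (inverse penalty strength) are illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))            # small sample, sparse truth
beta = np.array([2.0, -1.5] + [0.0] * 8)
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(np.round(fit.coef_, 2))             # zeros mark dropped variables
```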
994.
Journal of Statistical Computation and Simulation, 2012, 82(4): 682–695.
In this paper, we propose a new Bayesian inference approach for classification based on the traditional hinge loss used for classical support vector machines, which we call the Bayesian Additive Machine (BAM). Unlike existing approaches, the new model has a semiparametric discriminant function in which some feature effects are nonlinear and others are linear. This separation of features is achieved automatically during model fitting, without user pre-specification. Following the literature on sparse regression in high-dimensional models, we can also identify irrelevant features. By introducing spike-and-slab priors through two sets of indicator variables, these multiple goals are achieved simultaneously and automatically, without any parameter tuning such as cross-validation. An efficient partially collapsed Markov chain Monte Carlo algorithm is developed for posterior exploration, based on a data augmentation scheme for the hinge loss. Our simulations and three real data examples demonstrate that the new approach is a strong competitor to recently proposed methods for challenging high-dimensional classification problems.
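The BAM itself (hinge-loss likelihood, partially collapsed MCMC) is not reproduced here, but the prior structure the abstract describes can be illustrated: one indicator set flags relevant features, a second flags nonlinear effects. In this sketch all names and hyperparameters are illustrative, not the paper's:

```python
# Illustrative draw from a spike-and-slab prior with two indicator sets,
# mirroring the abstract's structure: gamma_j flags a relevant feature,
# delta_j flags a nonlinear (rather than linear) effect. The
# hyperparameters pi_rel, pi_nl, slab_sd are illustrative only.
import numpy as np

def draw_prior(p, pi_rel=0.2, pi_nl=0.5, slab_sd=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    gamma = rng.binomial(1, pi_rel, size=p)          # feature relevant?
    delta = rng.binomial(1, pi_nl, size=p) * gamma   # nonlinear effect?
    beta = gamma * rng.normal(0.0, slab_sd, size=p)  # slab for linear part
    return gamma, delta, beta

gamma, delta, beta = draw_prior(p=8, rng=np.random.default_rng(1))
```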
995.
Vlasios Voudouris, Robert Gilchrist, Robert Rigby, John Sedgwick, Dimitrios Stasinopoulos. Journal of Applied Statistics, 2012, 39(6): 1279–1293.
This paper illustrates the power of modern statistical modelling in understanding processes characterised by data that are skewed and have heavy tails. Our particular substantive problem concerns film box-office revenues. We show that traditional modelling techniques based on the Pareto–Lévy–Mandelbrot distribution led to the poorly supported conclusion that these data have infinite variance. This in turn led to the movie business's dominant paradigm that 'nobody knows anything', and hence that box-office revenues cannot be predicted. Using the Box–Cox power exponential distribution within the generalized additive models for location, scale and shape (GAMLSS) framework, we are able to model box-office revenues and develop probabilistic statements about them.
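The full BCPE fit within GAMLSS lives in the R gamlss package and is not reproduced here. As a rough Python sketch of the same two ingredients, one can Box–Cox-transform skewed revenue data and fit a power exponential (generalized normal) to the result; the simulated data below are illustrative:

```python
# Rough sketch of the two ingredients of the BCPE approach: a Box-Cox
# transform followed by a power-exponential (generalized normal) fit.
# This only approximates the idea; the paper fits the full BCPE
# distribution within GAMLSS. The simulated revenues are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
revenue = rng.lognormal(mean=15.0, sigma=1.8, size=500)  # skewed, heavy tail

transformed, lam = stats.boxcox(revenue)           # lambda estimated by ML
shape, loc, scale = stats.gennorm.fit(transformed) # power-exponential fit
print(f"Box-Cox lambda={lam:.2f}, power-exponential shape={shape:.2f}")
```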
996.
In varying-coefficient models, an important question is whether some of the varying coefficients are in fact invariant. This article proposes a penalized likelihood method in the framework of smoothing spline ANOVA models, with a penalty designed to automatically distinguish coefficients that vary from those that do not. Unlike stepwise procedures, the method identifies and estimates the coefficients simultaneously. An efficient algorithm is given and ways of choosing the smoothing parameters are discussed. Simulation results and an analysis of the Boston housing data illustrate the usefulness of the method. The proposed approach is further extended to longitudinal data analysis.
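The decomposition behind such a penalty can be written down in general form; the notation below is illustrative and not necessarily the paper's:

```latex
% Varying-coefficient model with each coefficient split into a constant
% part and a smooth deviation; penalizing the deviations g_j lets truly
% constant coefficients collapse to b_j. Notation is illustrative.
\[
  y_i = \sum_{j=1}^{p} \beta_j(t_i)\, x_{ij} + \varepsilon_i,
  \qquad \beta_j(t) = b_j + g_j(t),
\]
\[
  \min_{b,\, g}\; -\ell(b, g) + \lambda \sum_{j=1}^{p} J(g_j),
  \qquad J(g_j) = 0 \iff \beta_j \text{ is constant.}
\]
```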
997.
We propose a new approach to the selection of partially linear models based on the conditional expected prediction square loss function, which is estimated using the bootstrap. Because the linear and nonlinear parts converge at different rates, a key idea is to select each part separately. In the first step, we select the nonlinear components using an 'm-out-of-n' residual bootstrap that ensures good properties for the nonparametric bootstrap estimator. The second step selects the linear components from the remaining explanatory variables, and the non-zero parameters are selected based on a two-level residual bootstrap. We show that the model selection procedure is consistent under some conditions, and our simulations suggest that it selects the true model more often than the other selection procedures considered.
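The 'm-out-of-n' idea, resampling only m &lt; n residuals, can be sketched in a few lines. Here the subsample rule m = n^0.8 and the function names are illustrative assumptions, not the paper's choices:

```python
# Minimal sketch of an 'm-out-of-n' residual bootstrap: resample only
# m < n centred residuals to rebuild pseudo-responses. The rule
# m = n**0.8 and all names here are illustrative, not the paper's.
import numpy as np

def m_out_of_n_residual_bootstrap(y, fitted, n_boot=200, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    resid = y - fitted
    resid = resid - resid.mean()              # centre the residuals
    m = int(len(y) ** 0.8)                    # subsample size m < n
    draws = []
    for _ in range(n_boot):
        idx = rng.choice(len(y), size=m, replace=True)
        draws.append(fitted[idx] + resid[rng.choice(len(y), size=m)])
    return draws                              # bootstrap pseudo-samples
```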
998.
Building multivariable prognostic and diagnostic models: transformation of the predictors by using fractional polynomials
W. Sauerbrei & P. Royston. Journal of the Royal Statistical Society, Series A (Statistics in Society), 1999, 162(1): 71–94.
To be useful to clinicians, prognostic and diagnostic indices must be derived from accurate models developed on appropriate data sets. We show that fractional polynomials, which extend ordinary polynomials by including non-positive and fractional powers, may be used as the basis of such models. We describe how to fit fractional polynomials in several continuous covariates simultaneously, and we propose ways of ensuring that the resulting models are parsimonious and consistent with basic medical knowledge. The methods are applied to two breast cancer data sets, one from a prognostic factors study in patients with positive lymph nodes and the other from a study to diagnose malignant or benign tumours by using colour Doppler blood flow mapping. We investigate the problems of biased parameter estimates in the final model and of overfitting, using cross-validation calibration to estimate shrinkage factors. We adopt bootstrap resampling to assess model stability. We compare our new approach with conventional modelling methods which apply stepwise variable selection to categorized covariates. We conclude that fractional polynomial methodology can be very successful in generating simple and appropriate models.
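In the standard fractional-polynomial convention (Royston and Altman), powers are drawn from {-2, -1, -0.5, 0, 0.5, 1, 2, 3}, power 0 is read as log x, and a repeated power p contributes an extra x^p log x term. A minimal sketch of the resulting FP2 candidate basis (x must be positive):

```python
# Sketch of the standard fractional-polynomial basis: powers from
# {-2, -1, -0.5, 0, 0.5, 1, 2, 3}, power 0 read as log(x), and a
# repeated power p contributing x**p * log(x). Requires x > 0.
import itertools
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def fp_term(x, p):
    return np.log(x) if p == 0 else x ** p

def fp2_basis(x, p1, p2):
    t1 = fp_term(x, p1)
    t2 = t1 * np.log(x) if p1 == p2 else fp_term(x, p2)
    return np.column_stack([t1, t2])

# the 36 candidate FP2 power pairs enumerated during model selection
pairs = list(itertools.combinations_with_replacement(POWERS, 2))
```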
999.
The Bayesian CART (classification and regression tree) approach proposed by Chipman, George and McCulloch (1998) entails putting a prior distribution on the set of all CART models and then using stochastic search to select a model. The main thrust of this paper is to propose a new class of hierarchical priors which enhance the potential of this Bayesian approach. These priors express a preference for smooth local mean structure, resulting in tree models that shrink predictions from adjacent terminal nodes towards each other. Past methods for tree shrinkage have searched for trees without shrinking and applied shrinkage to the identified tree only after the search. By using hierarchical priors in the stochastic search, the proposed method searches directly for shrunk trees that fit well, improving the tree through shrinkage of predictions.
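The paper's hierarchical tree prior is not reproduced here, but the shrinkage mechanism itself can be illustrated with the standard normal-normal conjugate update: give every terminal-node mean a common prior, and each node's posterior mean is pulled from its sample mean toward the common centre. All values below are illustrative:

```python
# Illustration of the shrinkage mechanism only (not the paper's prior):
# every terminal-node mean gets a common N(mu0, tau2) prior, so each
# node's posterior mean is pulled toward mu0 by the conjugate update.
import numpy as np

def shrunken_node_means(node_samples, mu0, tau2, sigma2):
    out = []
    for y in node_samples:                  # y: responses in one leaf
        n = len(y)
        w = (n / sigma2) / (n / sigma2 + 1 / tau2)  # conjugate weight
        out.append(w * np.mean(y) + (1 - w) * mu0)
    return np.array(out)

leaves = [np.array([1.0, 1.2]), np.array([5.0, 5.5, 4.8])]
print(shrunken_node_means(leaves, mu0=3.0, tau2=1.0, sigma2=0.5))
```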
1000.
T. P. Hettmansperger & Hoben Thomas. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2000, 62(4): 811–825.
We consider ways to estimate the mixing proportions in a finite mixture distribution or to estimate the number of components of the mixture distribution without making parametric assumptions about the component distributions. We require a vector of observations on each subject. This vector is mapped into a vector of 0s and 1s and summed. The resulting distribution of sums can be modelled as a mixture of binomials. We then work with the binomial mixture. The efficiency and robustness of this method are compared with the strategy of assuming multivariate normal mixtures when, typically, the true underlying mixture distribution is different. It is shown that in many cases the approach based on simple binomial mixtures is superior.
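The pipeline the abstract describes, binarize each subject's vector, sum it to a count, and fit a binomial mixture, can be sketched directly. Here the binarization threshold, the two-component choice, and the EM starting values are illustrative assumptions:

```python
# Minimal sketch of the abstract's pipeline: binarize each subject's
# observation vector (threshold illustrative), sum to a count, and fit a
# two-component binomial mixture by EM. Starting values are arbitrary.
import numpy as np
from scipy import stats

def binomial_mixture_em(counts, m, n_iter=200):
    """EM for pi*Bin(m, p1) + (1 - pi)*Bin(m, p2)."""
    pi, p1, p2 = 0.5, 0.25, 0.75
    for _ in range(n_iter):
        f1 = pi * stats.binom.pmf(counts, m, p1)
        f2 = (1 - pi) * stats.binom.pmf(counts, m, p2)
        r = f1 / (f1 + f2)                  # E-step: responsibilities
        pi = r.mean()                       # M-step updates
        p1 = (r * counts).sum() / (r.sum() * m)
        p2 = ((1 - r) * counts).sum() / ((1 - r).sum() * m)
    return pi, p1, p2

rng = np.random.default_rng(3)
X = rng.normal(loc=np.repeat([0.0, 1.0], [60, 40])[:, None], size=(100, 8))
counts = (X > 0.5).sum(axis=1)              # binarize and sum per subject
print(binomial_mixture_em(counts, m=8))
```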