81.
The Analysis of Crop Variety Evaluation Data in Australia (cited 5 times: 0 self-citations, 5 by others)
Alison Smith, Brian Cullis & Arthur Gilmour, Australian & New Zealand Journal of Statistics, 2001, 43(2): 129-145
The major aim of crop variety evaluation is to predict the future performance of varieties. This paper presents the routine statistical analysis of data from late-stage testing of crop varieties in Australia, using a two-stage approach. The data from individual trials in the current year are analysed using spatial techniques. The resultant table of variety-by-trial means is combined with tables from previous years to form the data for an overall mixed model analysis, with weights accounting for the fact that these means are estimates of varying accuracy. In view of the predictive aim of the analysis, variety effects and interactions are regarded as random effects. Appropriate inferential tools have been developed to assist with interpretation of the results. Analyses must be conducted in a timely manner so that variety predictions can be published and disseminated to growers immediately after harvest each year. Factors which facilitate this include easy access to historical data and the use of specialist mixed model software.
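The shrinkage that underlies such random-effect variety predictions can be illustrated with a minimal sketch: a one-way random-effects combination of variety-by-trial means with known precisions, yielding BLUP-style predictions. This is not the authors' spatial/factor-analytic pipeline; the function names, the synthetic data and the assumption of a known variety-effect variance are all illustrative.

```python
import numpy as np

def variety_blups(means, weights, variety_idx, n_varieties, sigma2_v):
    """BLUP-style predictions of random variety effects from a table of
    variety-by-trial means.

    means       : variety-by-trial means (one entry per variety/trial cell)
    weights     : known precisions (1 / variance) of each mean, reflecting
                  that the entries are estimates of varying accuracy
    variety_idx : integer variety label for each mean
    sigma2_v    : variance of the random variety effects (taken as known
                  here; in practice it would be estimated, e.g. by REML)
    """
    means, weights = np.asarray(means, float), np.asarray(weights, float)
    mu = np.average(means, weights=weights)                # overall mean
    preds = np.zeros(n_varieties)
    for v in range(n_varieties):
        m = variety_idx == v
        ybar = np.average(means[m], weights=weights[m])    # precision-weighted variety mean
        var_ybar = 1.0 / weights[m].sum()                  # its sampling variance
        # Random-effect prediction: shrink the variety mean towards the overall mean.
        preds[v] = sigma2_v / (sigma2_v + var_ybar) * (ybar - mu)
    return mu, preds

# Tiny synthetic example: 3 varieties, 4 trials each, trial means of varying accuracy.
rng = np.random.default_rng(1)
variety_idx = np.repeat(np.arange(3), 4)
true_effects = np.array([0.5, 0.0, -0.5])
weights = rng.uniform(0.5, 2.0, size=12)
means = 4.0 + true_effects[variety_idx] + rng.normal(0, 1 / np.sqrt(weights))
print(variety_blups(means, weights, variety_idx, 3, sigma2_v=0.3))
```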
82.
Australian & New Zealand Journal of Statistics, 2001, 43(4): 495-499
Books reviewed:
Philip Hans Franses & Dick van Dijk, Non-linear Time Series Models in Empirical Finance
Herbert Spirer, Louise Spirer & A.J. Jaffe, Misused Statistics
Deborah J. Bennett, Randomness
C.E. Lunneborg, Data Analysis by Resampling: Concepts and Applications
I. Clark and W.V. Harper, Practical Geostatistics 2000
83.
Simplified Estimating Functions for Diffusion Models with a High-dimensional Parameter (cited 2 times: 0 self-citations, 2 by others)
We consider estimating functions for discretely observed diffusion processes of the following type: for one part of the parameter of interest we propose a simple and explicit estimating function of the type studied by Kessler (2000); for the remaining part we use a martingale estimating function. Such an approach is particularly useful in practical applications when the parameter is high-dimensional. Supplementing a simple estimating function with another type is also often necessary, because only the part of the parameter on which the invariant measure depends can be estimated by a simple estimating function. Under regularity conditions the resulting estimators are consistent and asymptotically normal. Several examples are considered in order to demonstrate the estimating procedure. The method is applied to two data sets comprising wind velocities and stock prices. In one example we also propose a general method for constructing diffusion models with a prescribed marginal distribution and a flexible dependence structure.
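A concrete, hypothetical illustration of the simple/martingale split is an Ornstein-Uhlenbeck process dX_t = -theta (X_t - mu) dt + sigma dW_t observed at spacing Delta: its invariant law N(mu, sigma^2 / (2 theta)) identifies only mu and v = sigma^2 / (2 theta) through a simple estimating function, while theta is recovered from the conditional mean E[X_{t+Delta} | X_t] = mu + (X_t - mu) exp(-theta Delta) via a martingale-type estimating function. The sketch below works under these OU assumptions and is not the paper's wind-velocity or stock-price analysis.

```python
import numpy as np

def simulate_ou(theta, mu, sigma, dt, n, x0, rng):
    """Exact simulation of an Ornstein-Uhlenbeck process on an equidistant grid."""
    x = np.empty(n)
    x[0] = x0
    a = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
    for i in range(1, n):
        x[i] = mu + a * (x[i - 1] - mu) + sd * rng.normal()
    return x

def estimate_ou(x, dt):
    """Two-part estimation mirroring the simple/martingale split.

    Simple estimating function (invariant-measure moments): estimates
    mu and v = sigma^2 / (2 theta).
    Martingale estimating function (conditional mean): estimates theta
    from the lag-one autoregression coefficient exp(-theta * dt).
    """
    mu_hat = x.mean()                      # simple EF: first invariant moment
    v_hat = x.var()                        # simple EF: invariant variance
    # Martingale EF: solve sum_i (x_i - mu) [x_{i+1} - mu - a (x_i - mu)] = 0 for a.
    xc = x - mu_hat
    a_hat = (xc[:-1] * xc[1:]).sum() / (xc[:-1] ** 2).sum()
    theta_hat = -np.log(a_hat) / dt
    sigma_hat = np.sqrt(2 * theta_hat * v_hat)
    return mu_hat, theta_hat, sigma_hat

rng = np.random.default_rng(0)
x = simulate_ou(theta=2.0, mu=1.0, sigma=0.5, dt=0.1, n=20000, x0=1.0, rng=rng)
print(estimate_ou(x, dt=0.1))
```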
84.
Hanfeng Chen, Jiahua Chen & John D. Kalbfleisch, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2001, 63(1): 19-29
Testing for homogeneity in finite mixture models has been investigated by many researchers. The asymptotic null distribution of the likelihood ratio test (LRT) is very complex and difficult to use in practice. We propose a modified LRT for homogeneity in finite mixture models with a general parametric kernel distribution family. The modified LRT has a χ²-type null limiting distribution and is asymptotically most powerful under local alternatives. Simulations show that it performs better than competing tests. They also reveal that the limiting distribution, with some adjustment, can satisfactorily approximate the quantiles of the test statistic, even for moderate sample sizes.
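The flavour of such a modified test can be sketched for the simplest case of a two-component N(mu, 1) mixture: the log-likelihood is penalized by a term of the form C log(4 p (1 - p)) that keeps the mixing proportion p away from 0 and 1, and the resulting statistic is referred to a 50:50 mixture of chi-square distributions with 0 and 1 degrees of freedom. The constant C, the optimizer and the reference distribution used below are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np
from scipy import optimize, stats

def loglik_mixture(params, x):
    """Log-likelihood of a two-component N(mu_k, 1) mixture."""
    p, mu1, mu2 = params
    dens = p * stats.norm.pdf(x, mu1) + (1 - p) * stats.norm.pdf(x, mu2)
    return np.log(dens).sum()

def modified_lrt(x, C=1.0):
    """Penalized LRT statistic for H0: one component vs H1: two components.

    The penalty C * log(4 * p * (1 - p)) bounds the mixing proportion away
    from 0 and 1, which is what gives the modified test a tractable
    chi-square type limit (approximated here by 0.5*chi2_0 + 0.5*chi2_1).
    """
    # Null fit: a single N(mu, 1) component, whose MLE is the sample mean.
    ll0 = stats.norm.logpdf(x, x.mean()).sum()

    def neg_pen_loglik(params):
        p, mu1, mu2 = params
        return -(loglik_mixture(params, x) + C * np.log(4 * p * (1 - p)))

    # A few starting values reduce the risk of a poor local optimum.
    best = None
    for p0 in (0.3, 0.5, 0.7):
        res = optimize.minimize(
            neg_pen_loglik,
            x0=[p0, x.mean() - x.std(), x.mean() + x.std()],
            bounds=[(1e-4, 1 - 1e-4), (None, None), (None, None)],
            method="L-BFGS-B",
        )
        if best is None or res.fun < best.fun:
            best = res
    stat = max(0.0, 2 * (-best.fun - ll0))
    # p-value under the 50:50 mixture of chi2(0) and chi2(1).
    pval = 0.5 * stats.chi2.sf(stat, df=1) if stat > 0 else 1.0
    return stat, pval

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=300)         # data generated under homogeneity
print(modified_lrt(x))
```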
85.
This paper studies partially time-varying coefficient models in which some covariates are measured with additive errors. To overcome the bias of the usual profile least squares estimation when measurement errors are ignored, we propose a modified profile least squares estimator of the regression parameter and construct estimators of the nonlinear coefficient function and the error variance. The three proposed estimators are shown to be asymptotically normal under mild conditions. In addition, we introduce a profile likelihood ratio test and show that it asymptotically follows a χ² distribution under the null hypothesis. The finite-sample behaviour of the estimators is also investigated via simulations.
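The bias-correction idea can be sketched in the simpler partially linear special case Y = xi'beta + g(T) + eps, with W = xi + U observed and the measurement-error covariance Sigma_u known: after profiling out g(T) with a kernel smoother, subtracting n*Sigma_u from the cross-product matrix removes the attenuation that the naive profile least squares estimator suffers. The smoother, bandwidth and synthetic data below are illustrative; the paper's time-varying coefficient setting is more general.

```python
import numpy as np

def local_constant_smoother(t, h=0.1):
    """Nadaraya-Watson smoother matrix S with a Gaussian kernel, so that
    S @ y gives fitted values of a regression of y on t."""
    t = np.asarray(t, float)
    d = (t[:, None] - t[None, :]) / h
    K = np.exp(-0.5 * d**2)
    return K / K.sum(axis=1, keepdims=True)

def corrected_profile_ls(y, w, t, sigma_u, h=0.1):
    """Bias-corrected profile least squares for the partially linear
    errors-in-variables model Y = xi'beta + g(T) + eps, with W = xi + U
    and known measurement-error covariance sigma_u.

    Ignoring the measurement error attenuates beta; subtracting
    n * sigma_u from the cross-product matrix removes that bias.
    """
    n = len(y)
    S = local_constant_smoother(t, h=h)
    y_t = y - S @ y            # profile out the nonparametric part
    w_t = w - S @ w
    naive = np.linalg.solve(w_t.T @ w_t, w_t.T @ y_t)
    corrected = np.linalg.solve(w_t.T @ w_t - n * sigma_u, w_t.T @ y_t)
    g_hat = S @ (y - w @ corrected)        # plug-in estimate of g(T)
    return naive, corrected, g_hat

# Synthetic check: beta = (1.5, -1), g(t) = sin(2*pi*t), 25% measurement-error variance.
rng = np.random.default_rng(3)
n = 2000
t = rng.uniform(0, 1, n)
xi = rng.normal(size=(n, 2))
beta = np.array([1.5, -1.0])
y = xi @ beta + np.sin(2 * np.pi * t) + rng.normal(0, 0.3, n)
sigma_u = np.diag([0.25, 0.25])
w = xi + rng.multivariate_normal(np.zeros(2), sigma_u, size=n)
naive, corrected, _ = corrected_profile_ls(y, w, t, sigma_u)
print("naive:", naive, "corrected:", corrected)
```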
86.
This article reviews Bayesian inference from the perspective that the designated model is misspecified. This misspecification has implications for the interpretation of objects such as the prior distribution, and it has been the cause of recent questioning of the appropriateness of Bayesian inference in this scenario. The main focus of the article is to establish the suitability of applying the Bayes update to a misspecified model, relying on representation theorems for sequences of symmetric distributions, the identification of parameter values of interest, and the construction of sequences of distributions which act as guesses as to where the next observation is coming from. The conclusion provides a clear identification of the fundamental starting point for the Bayesian analysis.
87.
In this paper, we propose a methodology to analyse longitudinal data through distances between pairs of observations (or individuals) with respect to the explanatory variables used to fit continuous response variables. Restricted maximum likelihood and generalized least squares are used to estimate the parameters in the model. We applied this new approach to study the effect of gender and exposure on deviant behaviour, with respect to tolerance, for a group of youths studied over a period of 5 years. We performed simulations comparing our distance-based method with classical longitudinal analysis under both AR(1) and compound symmetry correlation structures, evaluating the models by the Akaike and Bayesian information criteria and by the relative efficiency of the generalized variance of the errors of each model. We found small gains in fit for the proposed model relative to the classical methodology, particularly in small samples, regardless of the variance, correlation, autocorrelation structure and number of time measurements.
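For reference, the classical GLS baseline against which such comparisons are made can be sketched as follows: balanced longitudinal data, a common within-subject correlation matrix (AR(1) or compound symmetry) whose parameter is profiled over a grid rather than estimated by REML, and model comparison by AIC. This is a sketch of the comparison models only, not of the distance-based method itself; all variable names and the synthetic data are illustrative.

```python
import numpy as np

def corr_matrix(n_times, rho, structure):
    """Within-subject correlation matrix: AR(1) or compound symmetry."""
    idx = np.arange(n_times)
    if structure == "ar1":
        return rho ** np.abs(idx[:, None] - idx[None, :])
    if structure == "cs":
        return np.full((n_times, n_times), rho) + (1 - rho) * np.eye(n_times)
    raise ValueError(structure)

def gls_fit(y, X, n_subjects, n_times, rho, structure):
    """GLS estimate and Gaussian log-likelihood for balanced longitudinal
    data with a common within-subject correlation matrix."""
    R = corr_matrix(n_times, rho, structure)
    Rinv = np.linalg.inv(R)
    XtVX = np.zeros((X.shape[1], X.shape[1]))
    XtVy = np.zeros(X.shape[1])
    for i in range(n_subjects):                       # block-diagonal V^{-1}, subject by subject
        s = slice(i * n_times, (i + 1) * n_times)
        XtVX += X[s].T @ Rinv @ X[s]
        XtVy += X[s].T @ Rinv @ y[s]
    beta = np.linalg.solve(XtVX, XtVy)
    resid = y - X @ beta
    quad = sum(resid[i * n_times:(i + 1) * n_times] @ Rinv @ resid[i * n_times:(i + 1) * n_times]
               for i in range(n_subjects))
    n = len(y)
    sigma2 = quad / n                                 # profiled residual variance
    loglik = -0.5 * (n * np.log(2 * np.pi * sigma2)
                     + n_subjects * np.log(np.linalg.det(R)) + n)
    return beta, sigma2, loglik

def fit_and_compare(y, X, n_subjects, n_times):
    """Profile rho over a grid for each structure and compare by AIC."""
    out = {}
    for structure in ("ar1", "cs"):
        best = max(
            (gls_fit(y, X, n_subjects, n_times, rho, structure) + (rho,)
             for rho in np.linspace(0.0, 0.95, 20)),
            key=lambda r: r[2],
        )
        k = X.shape[1] + 2                            # beta, sigma2, rho
        out[structure] = {"beta": best[0], "rho": best[3], "AIC": -2 * best[2] + 2 * k}
    return out

# Synthetic example: 50 subjects, 5 time points, AR(1) errors with rho = 0.6.
rng = np.random.default_rng(4)
n_subjects, n_times = 50, 5
time = np.tile(np.arange(n_times), n_subjects)
X = np.column_stack([np.ones(n_subjects * n_times), time])
R = corr_matrix(n_times, 0.6, "ar1")
errors = rng.multivariate_normal(np.zeros(n_times), R, size=n_subjects).ravel()
y = X @ np.array([1.0, 0.5]) + errors
print(fit_and_compare(y, X, n_subjects, n_times))
```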
88.
A survey on health insurance was conducted in July and August of 2011 in three major cities in China. In this study, we analyze the household coverage rate, which is an important index of the quality of health insurance. The coverage rate is restricted to the unit interval [0, 1], and it may differ from other rate data in that the "two corners" are nonzero: there are nonzero probabilities of both zero and full coverage. Such data may also be encountered in economics, finance, medicine, and many other areas, and existing approaches may not be able to accommodate them properly. We develop a three-part model that properly describes fractional response variables with non-ignorable zeros and ones. We investigate estimation and inference under two proportionality constraints on the regression parameters. Such constraints may lead to more lucid interpretations and fewer unknown parameters, and hence more accurate estimation. A simulation study compares the performance of the constrained and unconstrained models and shows that estimation under the constraints can be more efficient. The analysis of the household health insurance coverage data suggests that household size, income, expense, and presence of chronic disease are associated with insurance coverage.
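A minimal sketch of the three-part idea (without the proportionality constraints studied in the paper): one logit model for zero coverage, one logit model for full coverage among the remaining households, and a beta regression for the interior values, each fitted by maximum likelihood. Covariate names and the synthetic data are illustrative.

```python
import numpy as np
from scipy import optimize, special

def fit_logit(y, X):
    """Maximum-likelihood logistic regression via BFGS."""
    def nll(b):
        eta = X @ b
        return np.sum(np.logaddexp(0, eta) - y * eta)
    return optimize.minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

def fit_beta_reg(y, X):
    """Beta regression with a logit mean link and common precision phi,
    fitted by maximum likelihood (y restricted to the open interval (0, 1))."""
    def nll(params):
        b, log_phi = params[:-1], params[-1]
        mu, phi = special.expit(X @ b), np.exp(log_phi)
        a, c = mu * phi, (1 - mu) * phi
        return -np.sum(special.gammaln(phi) - special.gammaln(a) - special.gammaln(c)
                       + (a - 1) * np.log(y) + (c - 1) * np.log1p(-y))
    res = optimize.minimize(nll, np.zeros(X.shape[1] + 1), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])

def fit_three_part(y, X):
    """Three-part model for a fractional response with non-ignorable zeros
    and ones: P(Y = 0), P(Y = 1 | Y > 0), and a beta regression on (0, 1)."""
    gamma0 = fit_logit((y == 0).astype(float), X)                  # zero vs rest
    pos = y > 0
    gamma1 = fit_logit((y[pos] == 1).astype(float), X[pos])        # one vs interior
    interior = (y > 0) & (y < 1)
    beta, phi = fit_beta_reg(y[interior], X[interior])
    return gamma0, gamma1, beta, phi

# Synthetic example with an intercept and one covariate.
rng = np.random.default_rng(5)
n = 1500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
p0 = special.expit(X @ [-1.0, 0.5])          # probability of zero coverage
p1 = special.expit(X @ [-0.5, 1.0])          # probability of full coverage given > 0
mu = special.expit(X @ [0.2, 0.8])           # mean of the interior beta part
y = rng.beta(mu * 10, (1 - mu) * 10)
y = np.where(rng.uniform(size=n) < p0, 0.0, np.where(rng.uniform(size=n) < p1, 1.0, y))
print(fit_three_part(y, X))
```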
89.
Bayesian model comparison for compartmental models with applications in positron emission tomography
We develop strategies for Bayesian modelling, as well as model comparison, averaging and selection, for compartmental models, with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and a comparison with default vague priors shows that the proposed modelling is not overly sensitive to the prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique automatically provides a characterisation of the uncertainty in the resulting estimates, which can be considerable in applications such as PET.
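As a rough, hypothetical illustration of why the noise model matters, the sketch below fits a one-tissue compartment model under an additive normal error model and under a signal-proportional variance model, and compares them by BIC as a crude stand-in for the full Bayesian model comparison developed in the paper. The input function, rate constants and noise levels are invented, and the paper's actual priors and computations are not reproduced.

```python
import numpy as np
from scipy import optimize

def one_tissue_model(K1, k2, t, cp):
    """Tissue curve C_T(t) = K1 * int_0^t Cp(s) exp(-k2 (t - s)) ds,
    evaluated by trapezoidal convolution on the sampling grid t."""
    ct = np.zeros_like(t)
    for i, ti in enumerate(t):
        s = t[: i + 1]
        integrand = cp[: i + 1] * np.exp(-k2 * (ti - s))
        ct[i] = K1 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))
    return ct

def fit_by_ml(t, cp, y, proportional_noise):
    """Gaussian ML fit of (K1, k2, noise scale); the variance is either
    constant (additive normal errors) or proportional to the signal."""
    def nll(params):
        logK1, logk2, log_s = params
        mu = one_tissue_model(np.exp(logK1), np.exp(logk2), t, cp)
        var = np.exp(2 * log_s) * (np.maximum(mu, 1e-6) if proportional_noise else 1.0)
        return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
    res = optimize.minimize(nll, [np.log(0.1), np.log(0.1), np.log(1.0)],
                            method="Nelder-Mead")
    bic = 2 * res.fun + 3 * np.log(len(y))   # 3 free parameters in each model
    return res.x, bic

# Synthetic example: gamma-variate plasma input, signal-dependent noise.
rng = np.random.default_rng(6)
t = np.linspace(0.1, 60.0, 30)               # minutes
cp = 20.0 * t * np.exp(-t / 4.0)             # assumed plasma input function
mu = one_tissue_model(K1=0.3, k2=0.1, t=t, cp=cp)
y = mu + rng.normal(0, 0.15 * np.sqrt(np.maximum(mu, 1e-6)))
for name, prop in [("additive normal", False), ("proportional variance", True)]:
    params, bic = fit_by_ml(t, cp, y, proportional_noise=prop)
    print(name, "BIC:", round(bic, 1))
```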
90.
A pivotal characteristic of credit defaults that is ignored by most credit scoring models is the rarity of the event. The most widely used model for estimating the probability of default is logistic regression. Since the dependent variable represents a rare event, the logistic regression model shows relevant drawbacks, for example, underestimation of the default probability, which can be very risky for banks. To overcome these drawbacks, we propose the generalized extreme value (GEV) regression model. In particular, in a generalized linear model (GLM) with a binary dependent variable we suggest the quantile function of the GEV distribution as the link function, so that attention is focused on the tail of the response curve for values close to one. Estimation is carried out by maximum likelihood. The model accommodates skewness and generalises the GLM with complementary log-log link function. We analyse its performance in simulation studies. Finally, we apply the proposed model to empirical data on Italian small and medium enterprises.
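A sketch of the GEV-link idea for a binary response: the response curve is taken to be the GEV distribution function pi(x) = exp{-[1 + xi * x'beta]^(-1/xi)}, which tends to exp{-exp(-x'beta)} as xi -> 0, and the parameters are estimated by maximum likelihood with the shape xi profiled over a grid. The exact parameterisation and estimation details in the paper may differ; the coefficients and data below are synthetic.

```python
import numpy as np
from scipy import optimize

def gev_response(eta, xi):
    """GEV c.d.f. used as the inverse link: pi = exp(-(1 + xi*eta)^(-1/xi)),
    defined where 1 + xi*eta > 0; it tends to exp(-exp(-eta)) as xi -> 0."""
    if abs(xi) < 1e-8:
        return np.exp(-np.exp(-eta))
    z = np.maximum(1.0 + xi * eta, 1e-12)
    return np.exp(-z ** (-1.0 / xi))

def fit_gev_glm(y, X, xi_grid=None):
    """Maximum-likelihood fit of a binary regression with a GEV response
    curve; the shape parameter xi is profiled over a grid for stability."""
    if xi_grid is None:
        xi_grid = np.linspace(-0.5, 0.5, 21)
    def nll(beta, xi):
        pi = np.clip(gev_response(X @ beta, xi), 1e-10, 1 - 1e-10)
        return -np.sum(y * np.log(pi) + (1 - y) * np.log(1 - pi))
    best = None
    for xi in xi_grid:
        res = optimize.minimize(nll, np.zeros(X.shape[1]), args=(xi,),
                                method="Nelder-Mead")
        if best is None or res.fun < best[0]:
            best = (res.fun, xi, res.x)
    return {"xi": best[1], "beta": best[2], "nll": best[0]}

# Synthetic rare-event example (default rate of a few per cent).
rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
pi_true = gev_response(X @ np.array([-1.5, 0.6]), xi=0.25)
y = (rng.uniform(size=n) < pi_true).astype(float)
fit = fit_gev_glm(y, X)
print(fit["xi"], fit["beta"], "observed default rate:", y.mean())
```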