71.
A fully parametric first-order autoregressive (AR(1)) model is proposed to analyse binary longitudinal data. By using a discretized version of a copula, the modelling approach allows one to construct separate models for the marginal response and for the dependence between adjacent responses. In particular, the transition model considered here discretizes the Gaussian copula in such a way that the marginal is a Bernoulli distribution. A probit link is used to take concomitant information into account in the behaviour of the underlying marginal distribution, and both fixed and time-varying covariates can be included in the model. The method is simple and is a natural extension of the AR(1) model for Gaussian series. Since the approach is likelihood-based, it allows interpretations and inferences that are not possible with semi-parametric approaches such as those based on generalized estimating equations. Data from a study designed to reduce children's exposure to the sun are used to illustrate the methods.
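A minimal simulation sketch of this kind of model may help fix ideas: a latent Gaussian AR(1) process is dichotomised so that the margins are Bernoulli with a probit-linked success probability, which is one concrete way to realise a discretized Gaussian copula. All function names and parameter values below are illustrative, not taken from the paper.

```python
import math
import random
from statistics import NormalDist

N = NormalDist()  # standard normal cdf/quantile


def simulate_binary_ar1(n, beta0, rho, rng):
    """Simulate a binary series whose serial dependence comes from a
    latent Gaussian AR(1) process (a discretized Gaussian copula)
    with a Bernoulli marginal and probit link.  Illustrative only."""
    z = rng.gauss(0.0, 1.0)          # stationary latent state, N(0, 1)
    p = N.cdf(beta0)                 # probit link: P(Y_t = 1) = Phi(beta0)
    y = []
    for _ in range(n):
        y.append(1 if N.cdf(z) < p else 0)
        # AR(1) update that keeps the latent process marginally N(0, 1)
        z = rho * z + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
    return y


y = simulate_binary_ar1(5000, beta0=0.5, rho=0.8, rng=random.Random(1))
```

With `beta0 = 0.5` the long-run proportion of ones should sit near Φ(0.5) ≈ 0.69, while `rho` controls the dependence between adjacent responses; a time-varying covariate would simply make `beta0`, and hence `p`, depend on `t`.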
72.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
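The two-stage idea can be sketched in a few lines: stage 1 reduces each unit's quality trajectory to a derived coefficient (here a least-squares slope, standing in for the paper's polynomial on a transformed scale), and stage 2 summarises those coefficients by treatment. Treatment labels and quality scores are invented for illustration.

```python
def slope(ts, ys):
    """Least-squares slope of ys on ts: the stage-1 'derived
    coefficient' summarising one unit's rate of quality decline."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den


# Stage 2 (sketched): compare mean decline rates across treatments.
# Units, labels and scores below are hypothetical.
units = {"treatment_A": [[5, 4, 3], [5, 4.5, 4]], "treatment_B": [[5, 3, 1]]}
rates = {trt: sum(slope([0, 1, 2], ys) for ys in scores) / len(scores)
         for trt, scores in units.items()}
```

A full stage 2 would model both the mean and the dispersion of these derived coefficients jointly, rather than averaging them as here.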
73.
Donor imputation is frequently used in surveys. However, very few variance estimation methods that take into account donor imputation have been developed in the literature. This is particularly true for surveys with high sampling fractions using nearest donor imputation, often called nearest‐neighbour imputation. In this paper, the authors develop a variance estimator for donor imputation based on the assumption that the imputed estimator of a domain total is approximately unbiased under an imputation model; that is, a model for the variable requiring imputation. Their variance estimator is valid, irrespective of the magnitude of the sampling fractions and the complexity of the donor imputation method, provided that the imputation model mean and variance are accurately estimated. They evaluate its performance in a simulation study and show that nonparametric estimation of the model mean and variance via smoothing splines brings robustness with respect to imputation model misspecifications. They also apply their variance estimator to real survey data when nearest‐neighbour imputation has been used to fill in the missing values. The Canadian Journal of Statistics 37: 400–416; 2009 © 2009 Statistical Society of Canada
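Nearest-donor imputation itself is simple to state; a sketch with a single auxiliary variable follows. The paper's actual contribution, the variance estimator, is not reproduced here, and all names are illustrative.

```python
def nn_impute(y, x):
    """Fill in missing y-values (None) with the y-value of the
    respondent whose auxiliary x is closest: nearest-neighbour
    ('nearest donor') imputation with one auxiliary variable."""
    donors = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    out = []
    for xi, yi in zip(x, y):
        if yi is not None:
            out.append(yi)                                  # respondent
        else:
            out.append(min(donors, key=lambda d: abs(d[0] - xi))[1])
    return out


completed = nn_impute([2.0, None, 6.0, None], [1.0, 1.1, 3.0, 2.8])
```

Because each imputed value is an observed donor value, naive variance formulas applied to the completed data ignore the imputation step, which is exactly the problem the paper's estimator addresses.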
74.
We consider two related aspects of the study of old‐age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub‐populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause‐of‐death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub‐populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz‐like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again.
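The deceleration-then-increase phenomenon is easy to reproduce numerically. The sketch below mixes three Gompertz sub-populations sharing a slope `b` but with very different frailty levels `a`; the parameter values are invented purely to make the effect visible and are not taken from the paper.

```python
import math


def mixture_hazard(x, groups):
    """Population hazard of a mixture of Gompertz sub-populations.
    Each group is (weight, a, b), with sub-hazard a*exp(b*x) and
    survivor function exp(-(a/b)*(exp(b*x) - 1))."""
    num = den = 0.0
    for p, a, b in groups:
        surv = math.exp(-(a / b) * (math.exp(b * x) - 1.0))
        num += p * a * math.exp(b * x) * surv   # p_i * f_i(x)
        den += p * surv                         # p_i * S_i(x)
    return num / den


# three groups: mostly frail, some intermediate, a few very robust
groups = [(0.90, 0.01, 0.1), (0.09, 0.001, 0.1), (0.01, 0.0001, 0.1)]
h = [mixture_hazard(x, groups) for x in range(121)]
```

As the frailer groups die out, the population hazard dips below its earlier level and then resumes a Gompertz-like rise, even though every sub-population's hazard is strictly increasing.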
75.
In analogy with the cumulative residual entropy recently proposed by Wang et al. [2003a. A new and robust information theoretic measure and its application to image alignment. In: Information Processing in Medical Imaging. Lecture Notes in Computer Science, vol. 2732, Springer, Heidelberg, pp. 388–400; 2003b. Cumulative residual entropy, a new measure of information and its application to image alignment. In: Proceedings on the Ninth IEEE International Conference on Computer Vision (ICCV’03), vol. 1, IEEE Computer Society Press, Silver Spring, MD, pp. 548–553], we introduce and study the cumulative entropy, which is a new measure of information alternative to the classical differential entropy. We show that the cumulative entropy of a random lifetime X can be expressed as the expectation of its mean inactivity time evaluated at X. Hence, our measure is particularly suitable to describe the information in problems related to ageing properties of reliability theory based on the past and on the inactivity times. Our results include various bounds to the cumulative entropy, its connection to the proportional reversed hazards model, and the study of its dynamic version that is shown to be increasing if the mean inactivity time is increasing. The empirical cumulative entropy is finally proposed to estimate the new information measure.
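The measure in question is CE(X) = −∫ F(x) log F(x) dx, with F the distribution function. A plug-in estimate from the step empirical cdf can be sketched as follows; the paper's empirical estimator may differ in detail, and the function name is ours.

```python
import math


def empirical_cumulative_entropy(sample):
    """Plug-in estimate of the cumulative entropy
    -integral of F_n(x) * log F_n(x) dx for non-negative lifetimes,
    where F_n is the step empirical cdf (F_n = j/n between the j-th
    and (j+1)-th order statistics; the endpoints contribute zero)."""
    xs = sorted(sample)
    n = len(xs)
    ce = 0.0
    for j in range(1, n):
        u = j / n
        ce -= (xs[j] - xs[j - 1]) * u * math.log(u)
    return ce


ce = empirical_cumulative_entropy([1.0, 2.0, 4.0])
```

Note that the estimate depends only on the spacings between order statistics, so it is invariant under a shift of the whole sample.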
76.
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse, may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified for the possibility that long-term survivors may be present in the data. The model attempts to separately estimate the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and total local influence of an individual are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the log-gamma generalized mixture model. A residual analysis is performed in order to select an appropriate model. The authors would like to thank the editor and referees for their helpful comments. This work was supported by CNPq, Brazil.
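The mixture structure is compact enough to sketch: a logistic-regression "cured" fraction π(x) never experiences the event, so the population survivor function is π(x) + (1 − π(x))·S0(t). The exponential S0 below is a placeholder for the paper's generalized log-gamma, and all names and coefficients are illustrative.

```python
import math


def cure_survival(t, x, beta, s0):
    """Population survivor function of a cure-rate mixture model:
    a logistic fraction pi(x) is immune, the rest follow s0(t).
    beta = (intercept, slope) for the logistic surviving fraction."""
    eta = beta[0] + beta[1] * x
    pi = 1.0 / (1.0 + math.exp(-eta))          # surviving fraction
    return pi + (1.0 - pi) * s0(t)


# placeholder susceptible-group survivor (exponential, rate 1)
s0 = lambda t: math.exp(-t)
s_late = cure_survival(50.0, 0.0, (0.0, 1.0), s0)
```

The signature feature of such models is visible here: as t grows, the survivor function levels off at π(x) instead of decaying to zero, which is what a long right tail of censored times suggests.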
77.
Summary.  We propose a flexible generalized auto-regressive conditional heteroscedasticity type of model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, and also in comparison with other approaches, and we present some supporting asymptotic arguments.
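As a baseline for what the B-spline expansion generalises, the plain GARCH(1,1) volatility recursion can be written in a few lines. Parameter values are illustrative; in practice they would be estimated by (penalised) maximum likelihood, and the paper replaces this fixed parametric form with a regularised B-spline basis expansion.

```python
def garch11_vol(returns, omega, alpha, beta, sigma2_0):
    """One-step-ahead conditional variances from a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
    started from sigma2_0.  Returns one variance per observation."""
    sig2 = [sigma2_0]
    for r in returns[:-1]:
        sig2.append(omega + alpha * r * r + beta * sig2[-1])
    return sig2


vols = garch11_vol([1.0, 2.0, 0.5], omega=0.1, alpha=0.1, beta=0.8,
                   sigma2_0=1.0)
```

Each conditional variance is a fixed linear function of the previous squared return and variance; the B-spline approach lets that function be a flexible smooth surface instead.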
78.
Abstract.  Collapsibility means that the same statistical result of interest can be obtained before and after marginalization over some variables. In this paper, we discuss three kinds of collapsibility for directed acyclic graphs (DAGs): estimate collapsibility, conditional independence collapsibility and model collapsibility. Related to collapsibility, we discuss removability of variables from a DAG. We present conditions for these three different kinds of collapsibility and relationships among them. We give algorithms to find a minimum variable set containing a variable subset of interest onto which a statistical result is collapsible.
79.
We re-examine the dynamic behaviour of China's monthly inflation path using an MS-ARFIMA model that allows the long-memory parameter d to switch between regimes. The results show that not only do the mean level and the uncertainty of inflation exhibit a "low-inflation" regime and a "high-inflation" regime, but, more importantly, the stationarity of the inflation series also displays pronounced regime-switching dynamics. In the low-inflation regime the long-memory parameter is d1 = 0.361, so inflation is a covariance-stationary series; in the high-inflation regime it is d2 = 1.145, so inflation is non-stationary. This new finding implies that the persistence of inflation shocks in China likewise shifts across regimes. In managing inflation, the central bank should therefore account for regime changes not only in the mean and in uncertainty but also in stationarity and persistence.
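The role of the long-memory parameter d can be illustrated with the fractional-differencing filter (1 − L)^d that defines ARFIMA models: its coefficients follow a simple recursion, and d < 0.5 corresponds to covariance stationarity. These are standard ARFIMA facts; the regime-switching (MS) estimation itself is not reproduced, and the function name is ours.

```python
def fracdiff_weights(d, k):
    """First k coefficients pi_j of (1 - L)^d, the fractional
    differencing filter in ARFIMA(p, d, q) models, via the standard
    recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    w = [1.0]
    for j in range(1, k):
        w.append(w[-1] * (j - 1 - d) / j)
    return w


w_low = fracdiff_weights(0.361, 4)    # stationary regime: d < 0.5
w_unit = fracdiff_weights(1.0, 3)     # d = 1 recovers ordinary differencing
```

For d = 1 the filter collapses to the usual first difference (1 − L), consistent with the high-inflation estimate d2 = 1.145 implying non-stationary, near-unit-root behaviour.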
80.
Taking wheat and soybeans as examples, we study the volatility of Chinese grain prices from January 2002 to June 2012. The price series are first seasonally adjusted with the X-12-ARIMA procedure, and ARCH-family models are then fitted to the deseasonalised series. The results show that the seasonal component of Chinese grain prices has weakened year by year; that grain prices display clear volatility clustering, with earlier price movements and external shocks having persistent effects on later prices; that the grain market does not exhibit a "high risk, high return" pattern; and that, while any asymmetry in wheat price volatility is insignificant, soybean price volatility is markedly asymmetric, with news of a price rise in the previous period inducing larger volatility than news of a fall.
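The asymmetry reported for soybeans is the kind of effect a GJR-style asymmetric ARCH recursion captures. In the sketch below the leverage term is attached to positive returns, so that upward price news raises next-period volatility more than downward news; all parameter values are invented for illustration and this is not the paper's fitted model.

```python
def gjr_vol(returns, omega, alpha, gamma, beta, sigma2_0):
    """Conditional variances from a GJR-type asymmetric GARCH(1,1):
    sigma2_t = omega + (alpha + gamma * [r_{t-1} > 0]) * r_{t-1}**2
                     + beta * sigma2_{t-1}.
    Putting the indicator on positive returns makes upward shocks
    the more volatility-inducing side, as found for soybeans."""
    sig2 = [sigma2_0]
    for r in returns[:-1]:
        arch = (alpha + (gamma if r > 0 else 0.0)) * r * r
        sig2.append(omega + arch + beta * sig2[-1])
    return sig2


vols = gjr_vol([1.0, -1.0, 0.0], omega=0.1, alpha=0.1, gamma=0.05,
               beta=0.8, sigma2_0=1.0)
```

With `gamma > 0`, a +1 shock produces a strictly higher next-period variance than a −1 shock from the same state, which is the asymmetry the abstract describes; setting `gamma = 0` recovers the symmetric GARCH(1,1).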
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号