Search results: 11,686 subscription articles, 855 open-access, 137 domestic open-access. By subject: Statistics 4,486; Comprehensive 3,069; Sociology 2,037; Management 1,520; Social theory and methodology 959; Collected works 410; Demography 166; Ethnology 29; Labour science 1; Talent studies 1. Publication years range from 1979 to 2024 (peak: 1,677 articles in 2013). Sorted by relevance; 10,000 results returned in 15 ms.
91.
The author considers estimation under a Gamma process model for degradation data. In this setting, n independent units, each following a Gamma process with a common shape function and scale parameter, are observed at several, possibly different, times. Covariates can be incorporated into the model by taking the scale parameter to be a function of the covariates. The author proposes estimating the unknown parameters by maximum pseudo‐likelihood, a method that requires the Pool Adjacent Violators Algorithm. Asymptotic properties, including consistency, the convergence rate and the asymptotic distribution, are established. Simulation studies are conducted to validate the method, and its application is illustrated using bridge-beam data and carbon‐film resistor data. The Canadian Journal of Statistics 37: 102‐118; 2009 © 2009 Statistical Society of Canada
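The abstract names the Pool Adjacent Violators Algorithm as an ingredient of the estimation procedure. As a minimal illustration of that algorithm alone (not the paper's pseudo-likelihood estimator), a weighted PAVA for an isotonic least-squares fit might look like:

```python
def pava(y, w=None):
    """Pool Adjacent Violators Algorithm: weighted least-squares
    non-decreasing (isotonic) fit to the sequence y."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    # Each block stores [weighted mean, total weight, count].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    # Expand block means back to a full-length fitted sequence.
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean 2.5, yielding a monotone fit.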
92.
Donor imputation is frequently used in surveys. However, very few variance estimation methods that take into account donor imputation have been developed in the literature. This is particularly true for surveys with high sampling fractions using nearest donor imputation, often called nearest‐neighbour imputation. In this paper, the authors develop a variance estimator for donor imputation based on the assumption that the imputed estimator of a domain total is approximately unbiased under an imputation model; that is, a model for the variable requiring imputation. Their variance estimator is valid, irrespective of the magnitude of the sampling fractions and the complexity of the donor imputation method, provided that the imputation model mean and variance are accurately estimated. They evaluate its performance in a simulation study and show that nonparametric estimation of the model mean and variance via smoothing splines brings robustness with respect to imputation model misspecifications. They also apply their variance estimator to real survey data when nearest‐neighbour imputation has been used to fill in the missing values. The Canadian Journal of Statistics 37: 400–416; 2009 © 2009 Statistical Society of Canada
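For readers unfamiliar with the imputation step itself (the paper's contribution is the variance estimator, not this step), nearest-neighbour donor imputation on a single auxiliary variable can be sketched as follows; the record layout and key names here are illustrative assumptions:

```python
def nn_impute(records, y_key, x_key):
    """Nearest-neighbour (nearest-donor) imputation: each record with a
    missing y value copies y from the respondent whose auxiliary
    variable x is closest in absolute distance."""
    donors = [r for r in records if r[y_key] is not None]
    for r in records:
        if r[y_key] is None:
            donor = min(donors, key=lambda d: abs(d[x_key] - r[x_key]))
            r[y_key] = donor[y_key]
    return records
```

A real survey application would typically match on several auxiliary variables and within imputation classes; this one-variable version only conveys the mechanism.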
93.
To enhance modeling flexibility, the authors propose a nonparametric hazard regression model, for which the ordinary and weighted least squares estimation and inference procedures are studied. The proposed model does not assume any parametric specifications on the covariate effects, which is suitable for exploring the nonlinear interactions between covariates, time and some exposure variable. The authors propose the local ordinary and weighted least squares estimators for the varying‐coefficient functions and establish the corresponding asymptotic normality properties. Simulation studies are conducted to empirically examine the finite‐sample performance of the new methods, and a real data example from a recent breast cancer study is used as an illustration. The Canadian Journal of Statistics 37: 659–674; 2009 © 2009 Statistical Society of Canada
94.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1’ or ‘θ < θ1’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
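The bagging idea described above can be sketched for the simplest case, a sample mean constrained to exceed θ1: average the truncated estimator max(θ̂*, θ1) over bootstrap resamples, which smooths the hard truncation. This is only an illustration of the mechanism, not the paper's general treatment:

```python
import random

def bagged_constrained_mean(data, theta1, n_boot=500, seed=0):
    """Bagged estimate of a mean subject to theta > theta1: average
    max(theta_hat*, theta1) over bootstrap resamples, so the estimator
    responds smoothly to perturbations of the data instead of sitting
    exactly at theta1."""
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        theta_star = sum(resample) / n
        total += max(theta_star, theta1)
    return total / n_boot
```

When the data strongly support the constraint, the bagged estimate is close to the unconstrained mean; when θ̂ sits near θ1, bagging averages over resamples on both sides of the bound.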
95.
Abstract. We consider a stochastic process driven by diffusions and jumps. Given a discrete record of observations, we devise a technique for identifying the times when jumps larger than a suitably defined threshold occurred. This allows us to determine a consistent non‐parametric estimator of the integrated volatility when the infinite activity jump component is Lévy. Jump size estimation and central limit results are proved in the case of finite activity jumps. Some simulations illustrate the applicability of the methodology in finite samples and its superiority over multipower variations, especially when it is not possible to use high-frequency data.
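The thresholding mechanism can be illustrated as follows: increments of the discretely observed path larger than a threshold shrinking with the observation step Δ are flagged as jumps, and the remaining squared increments estimate integrated volatility. The threshold form c·Δ^0.49 below is an illustrative choice of a power slightly less than 1/2, not necessarily the paper's exact specification:

```python
def truncated_realized_variance(x, delta, c=3.0):
    """Threshold (truncated) realized variance: sum squared increments
    of the observed path x, discarding any increment whose magnitude
    exceeds r(delta) = c * delta**0.49, which flags it as a jump.
    Returns (integrated-volatility estimate, indices of jump increments)."""
    thresh = c * delta ** 0.49
    iv, jumps = 0.0, []
    for i in range(1, len(x)):
        inc = x[i] - x[i - 1]
        if abs(inc) <= thresh:
            iv += inc * inc
        else:
            jumps.append(i)
    return iv, jumps
```

On a path with one large move, the big increment is excluded from the volatility estimate and its index is reported as a detected jump time.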
96.
The development of manufacturing bears on the secondary industry and on the national economy as a whole, and manufacturing growth is the leading force in the growth of China's industrial economy. This paper applies data envelopment analysis (DEA) to measure the efficiency of individual manufacturing industries and of manufacturing in each Chinese province, and then builds a panel-data model combining the provincial manufacturing efficiency measures with external data on efficiency determinants such as patent applications and the level of regional economic development. The empirical analysis shows that regional economic development promotes manufacturing efficiency, while patent applications are negatively correlated with manufacturing efficiency.
97.
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model attempts to estimate separately the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is carried out via maximum likelihood. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a medical data set is analyzed under the generalized log-gamma mixture model, and a residual analysis is performed in order to select an appropriate model. The authors would like to thank the editor and referees for their helpful comments. This work was supported by CNPq, Brazil.
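The mixture structure described above implies a population survival function that plateaus at the cured fraction: S_pop(t) = p(x) + (1 − p(x))·S0(t), with p(x) given by a logistic regression. A minimal sketch (the baseline survival function and coefficient layout here are illustrative assumptions, not the paper's fitted generalized log-gamma model):

```python
import math

def cure_survival(t, x, beta, base_surv):
    """Population survival under a mixture cure model: a logistic link
    gives the cured (immune) fraction p(x), and the susceptibles follow
    the baseline survival S0, so S_pop(t) = p + (1 - p) * S0(t).
    beta[0] is the intercept; beta[1:] multiply the covariates x."""
    eta = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
    p = 1.0 / (1.0 + math.exp(-eta))
    return p + (1.0 - p) * base_surv(t)
```

With no covariates and a zero intercept, p = 0.5, so the survival curve starts at 1 and flattens out at 0.5 instead of decaying to zero — the signature of a cured fraction.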
98.
There are now three essentially separate literatures on the topics of multiple systems estimation, record linkage, and missing data, but in practice the three are intimately intertwined. For example, record linkage involving multiple data sources for human populations is often carried out with the express goal of developing a merged database for multiple systems estimation (MSE). Similarly, one way to view both the record linkage and MSE problems is as problems of estimating missing data. This presentation highlights the technical nature of these interrelationships and provides a preliminary effort at their integration.
99.
Forecasting models for doubly heterogeneous data sequences based on the kernel and degree of greyness   Total citations: 1; self-citations: 2; other citations: 1
By building a DGM(1,1) model of the "kernel" sequence of grey heterogeneous data, the kernel of a doubly heterogeneous data sequence can be predicted. Taking the kernel as the basis, and taking the larger of the information domains of the interval grey numbers in the doubly heterogeneous sequence as the information domain of the prediction, a grey forecasting model is constructed for doubly heterogeneous sequences consisting of both interval grey numbers and real numbers. This effectively extends the modeling objects of grey forecasting models from homogeneous data to doubly heterogeneous data, and the results are of positive significance for enriching the theory of grey forecasting models.
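The DGM(1,1) model referenced above fits the recursion x⁽¹⁾(k+1) = β1·x⁽¹⁾(k) + β2 on the accumulated (AGO) series and differences the forecast back to the original scale. A minimal sketch of that base model on a real-valued sequence (the abstract's extension to interval grey numbers via the kernel is not reproduced here):

```python
def dgm11_forecast(x0, steps=1):
    """Discrete grey model DGM(1,1): accumulate x0 into x1 (AGO), fit
    x1[k+1] = b1*x1[k] + b2 by ordinary least squares, extend the
    accumulated series, then first-difference back to the original scale."""
    # Accumulated generating operation (running sums).
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # OLS for b1, b2 on the pairs (x1[k], x1[k+1]).
    xs, ys = x1[:-1], x1[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
          / sum((a - mx) ** 2 for a in xs))
    b2 = my - b1 * mx
    # Extend the accumulated series, then difference to recover x0 scale.
    ext = list(x1)
    for _ in range(steps):
        ext.append(b1 * ext[-1] + b2)
    return [ext[len(x1) + i] - ext[len(x1) + i - 1] for i in range(steps)]
```

On the geometric sequence 1, 2, 4, 8 the accumulated series satisfies the recursion exactly (b1 = 2, b2 = 1), so the one-step forecast recovers the next term, 16.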
100.
Here we consider wavelet-based identification and estimation of a censored nonparametric regression model via block thresholding methods and investigate their asymptotic convergence rates. We show that these estimators, based on block thresholding of empirical wavelet coefficients, achieve optimal convergence rates over a large range of Besov function classes and, in particular, attain those rates without the extraneous logarithmic penalties usually suffered by term-by-term thresholding methods. This work is an extension of results in Li et al. (2008). The performance of the proposed estimator is investigated in a numerical study.
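The contrast with term-by-term thresholding can be illustrated by a James–Stein-type block shrinkage rule: coefficients are grouped into blocks, and each block is kept or shrunk according to its total energy rather than coefficient by coefficient. The shrinkage factor below is one standard choice, not necessarily the exact rule analyzed in the paper:

```python
def block_threshold(coeffs, block_size, lam):
    """Block shrinkage of (empirical wavelet) coefficients: within each
    block, multiply all coefficients by max(0, 1 - lam*L/S2), where L is
    the block length and S2 the block's sum of squares; low-energy
    blocks are zeroed out as a group."""
    out = []
    for i in range(0, len(coeffs), block_size):
        block = coeffs[i:i + block_size]
        s2 = sum(c * c for c in block)
        shrink = max(0.0, 1.0 - lam * len(block) / s2) if s2 > 0 else 0.0
        out.extend(c * shrink for c in block)
    return out
```

Because the keep-or-kill decision pools information across a block, a strong coefficient can rescue its weak neighbours, which is the mechanism behind the removal of the logarithmic penalty in the convergence rates.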