81.
The purpose of this article is two-fold. First, we explore a notion of optimality of the customary Jensen bound among all Jensen-type bounds. Without such a result, the customary Jensen bound stands alone as simply another bound. The proposed notion and the associated optimality are important because in some situations Jensen's inequality does leave us empty-handed.

When Jensen's inequality is highlighted, unfortunately only a handful of nearly routine applications continue to be recycled time after time, and such encounters rarely produce any excitement. This article may change that outlook through its second purpose, which is to introduce a variety of unusual applications of Jensen's inequality. The collection of applications and their derivations is new.
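For reference, the customary Jensen bound discussed above is the classical statement that, for a convex function \varphi and an integrable random variable X,

\[
  \varphi\bigl(\mathbb{E}[X]\bigr) \le \mathbb{E}\bigl[\varphi(X)\bigr],
\]

with the inequality reversed for concave \varphi (for instance, \mathbb{E}[\log X] \le \log \mathbb{E}[X] for a positive X).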
82.
The operating characteristic (OC) curves of certain known-sigma variables sampling plans may be unsatisfactory in that they tend to reject even lots of acceptable quality. This note presents the theory and a method for identifying known-sigma variables plans whose OC curves are unsatisfactory in this sense.
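As background (a standard result, not taken from the note itself): for a single-sided known-sigma plan with sample size n, acceptability constant k, and upper specification limit U, a lot is accepted when \bar{x} + k\sigma \le U, so for lot fraction nonconforming p the OC curve is

\[
  P_a(p) = \Phi\bigl(\sqrt{n}\,(z_{1-p} - k)\bigr),
\]

where \Phi is the standard normal distribution function and z_{1-p} its (1-p)-quantile.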
83.
We wish to test the null hypothesis that the means of N panels remain the same during the observation period of length T. A quasi-likelihood argument leads to self-normalized statistics whose limit distribution under the null hypothesis is double exponential. The main results are derived assuming that each panel is based on independent observations and are then extended to linear processes. The proofs are based on an approximation of the sum of squared CUSUM processes using the Skorokhod embedding scheme. A simulation study illustrates that our results can be used for small and moderate N and T. We apply our results to detect a change in the "corruption index".
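A minimal sketch of the sum-of-squared-CUSUM statistic described above, assuming the N panels sit in the rows of a NumPy array; the variance scaling below is illustrative only and is not the paper's exact self-normalization:

    import numpy as np

    def panel_cusum_statistic(X):
        """Sum over panels of the maximal squared, scaled CUSUM.
        X: array of shape (N, T); row i holds panel i's observations.
        Large values point toward a change in some panel mean.
        (Illustrative scaling; not the paper's exact normalization.)"""
        N, T = X.shape
        stat = 0.0
        for x in X:
            s = np.cumsum(x - x.mean())      # CUSUM of the demeaned series
            sigma2 = x.var(ddof=1)           # panel-wise variance estimate
            stat += np.max(s[:-1] ** 2) / (sigma2 * T)
        return stat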
84.
We develop a likelihood ratio test for an abrupt change point in Weibull hazard functions with covariates, including the two-piece constant hazard as a special case. We first define the log-likelihood ratio test statistic as the supremum of the profile log-likelihood ratio process over the interval that may contain an unknown change point. Using local asymptotic normality (LAN) and empirical measure arguments, we show that the profile log-likelihood ratio process converges weakly to a quadratic form of Gaussian processes. We determine the critical values of the test and discuss how the test can be used for model selection. We also illustrate the method using the Chronic Granulomatous Disease (CGD) data.
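For the two-piece constant hazard special case mentioned above, and ignoring covariates and censoring for brevity, the supremum of the profile log-likelihood ratio can be sketched as follows (an illustrative reconstruction, not the authors' implementation):

    import numpy as np

    def sup_loglik_ratio(times, grid):
        """Supremum over candidate change points of the profile
        log-likelihood ratio for a two-piece constant hazard,
        with complete (uncensored) failure times."""
        t = np.asarray(times, dtype=float)
        n = t.size
        ll0 = n * np.log(n / t.sum()) - n        # single-hazard null fit
        best = -np.inf
        for tau in grid:
            d1 = np.sum(t <= tau)                # events before tau
            d2 = n - d1                          # events after tau
            e1 = np.minimum(t, tau).sum()        # exposure before tau
            e2 = np.maximum(t - tau, 0.0).sum()  # exposure after tau
            if d1 == 0 or d2 == 0:
                continue                         # hazard MLE undefined here
            ll1 = d1 * np.log(d1 / e1) + d2 * np.log(d2 / e2) - n
            best = max(best, 2.0 * (ll1 - ll0))
        return best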
85.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple-lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information-matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend-determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data-generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson-Plosser data are best construed as trend stationary with a change in the trend function occurring at 1929.
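The Laplace approximation invoked above is, in its standard form (background only, not a formula quoted from the paper),

\[
  \int_{\mathbb{R}^d} e^{n h(\theta)}\, d\theta \;\approx\; \left(\frac{2\pi}{n}\right)^{d/2} \bigl|-H(\hat\theta)\bigr|^{-1/2} e^{n h(\hat\theta)},
\]

where \hat\theta maximizes h and H(\hat\theta) is the Hessian of h at \hat\theta.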
86.
The literature on testing for the presence of Rosenberg's (1973) return-to-normalcy random coefficient model is well developed, with both Shively (1988) and Brooks (1993) advocating the use of point optimal tests. This paper explores the robustness of point optimal testing for the Rosenberg alternative to two departures: the special-case Hildreth-Houck (1968) alternative and non-normality in the regression disturbances, finding the point optimal testing approach to be fairly robust to both.
87.
Does the Skill Premium Stem from Biased Technical Change?
Endogenous growth theory has long been concerned with the contribution of technical progress to economic growth, but modern research generally neglects the biased effects that technical progress may have on heterogeneous factors, in particular whether technical progress is skill-biased and thereby drives pay divergence across different types of workers. Using a two-level nested CES production function and nonlinear seemingly unrelated regression, this paper estimates China's skill premium. We find that the elasticity of substitution between capital and labor is below one while that between skilled and unskilled labor is above one, that the bias of technical progress and the substitution effect between skilled and unskilled labor are pronounced, and that data simulated from the biased-technical-change model show no appreciable difference from the observed values, confirming that the skill premium stems from biased technical change and that the bias effect keeps strengthening. Regression-based tests likewise show a significantly positive effect of biased technical change on the skill premium, verifying that China's skill premium is mainly the result of biased technical change.
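A generic form of the two-level nested CES production function referred to above (a textbook specification, not necessarily the paper's exact parameterization) is

\[
  Y = A\bigl[\alpha K^{\rho} + (1-\alpha)L^{\rho}\bigr]^{1/\rho}, \qquad
  L = \bigl[\beta (B L_s)^{\eta} + (1-\beta)(C L_u)^{\eta}\bigr]^{1/\eta},
\]

where L_s and L_u are skilled and unskilled labor, B and C are factor-augmenting technology terms, \sigma_{KL} = 1/(1-\rho) is the capital-labor elasticity of substitution, and \sigma_{su} = 1/(1-\eta) the skilled-unskilled one; the paper's estimates correspond to \sigma_{KL} < 1 and \sigma_{su} > 1.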
88.
Censored data arise naturally in a number of fields, particularly in problems of reliability and survival analysis. There are several types of censoring; in this article we confine ourselves to right random censoring. Recently, Ahmadi, Doostparast, and Parsian (2010, Communications in Statistics - Theory and Methods 39:3058-3071) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data. They assumed that the survival function of the censoring time is free of the unknown parameter. This assumption is sometimes inappropriate. In such cases, a proportional odds (PO) model may be more appropriate (Lam and Leung, 2001, Lifetime Data Analysis 7:39-54). Under this model, point and interval estimates for the unknown parameters are obtained in this article. Since it is important to check the adequacy of models upon which inferences are based (Lawless, 2003, Statistical Models and Methods for Lifetime Data, 2nd ed., p. 465), two new goodness-of-fit tests for the PO model based on right randomly censored data are proposed. The proposed procedures are applied to two real data sets due to Smith (2002, Analysis of Failure and Survival Data). A Monte Carlo simulation study is conducted to examine the behavior of the estimators.
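One natural formalization of the PO assumption in this censoring context (our reading of the setup, not a formula quoted from the article) ties the odds of the censoring-time distribution to those of the lifetime distribution through a constant \theta > 0:

\[
  \frac{1 - S_C(t)}{S_C(t)} \;=\; \theta\,\frac{1 - S_X(t)}{S_X(t)},
\]

where S_X and S_C are the survival functions of the lifetime and the censoring time; it plays the same role for odds that the Koziol-Green model S_C = S_X^{\beta} plays for hazards.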
89.
In the analysis of time-to-event data, restricted mean survival time has been well investigated in the literature and is provided by many commercial software packages, while calculating mean survival time remains a challenge due to censoring or insufficient follow-up time. Several researchers have proposed a hybrid estimator of mean survival based on the Kaplan-Meier curve with an extrapolated tail. However, this approach often yields biased estimates, because the parameters of the extrapolated tail are poorly estimated and the tail of the Kaplan-Meier curve is highly variable when few patients remain at risk. Two key challenges in this approach are (1) where the extrapolation should start and (2) how to estimate the parameters of the extrapolated tail. The authors propose a novel approach to calculating mean survival time that addresses both challenges. In the proposed approach, an algorithm searches for time points at which the hazard rate changes significantly. The survival function is estimated by the Kaplan-Meier method prior to the last change point and approximated by an exponential function beyond it, with the parameter of the exponential estimated locally. Mean survival time is derived from this survival function. Simulation and case studies demonstrate the superiority of the proposed approach.
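A minimal sketch of the hybrid estimator described above, assuming the change point t_star has already been found; the change-point search and the paper's local estimation procedure are not reproduced here:

    import numpy as np

    def hybrid_mean_survival(time, event, t_star):
        """Kaplan-Meier area on [0, t_star] plus an exponential tail
        beyond t_star (illustrative sketch, not the authors' exact
        algorithm). time: observed times; event: 1 = event, 0 = censored.
        Assumes at least one event is observed after t_star."""
        order = np.argsort(time)
        t = np.asarray(time, dtype=float)[order]
        d = np.asarray(event, dtype=float)[order]
        n = t.size
        at_risk = n - np.arange(n)
        km = np.cumprod(1.0 - d / at_risk)   # KM survival just after each time

        # Integrate the KM step function over [0, t_star]
        knots = np.concatenate(([0.0], np.minimum(t, t_star)))
        surv = np.concatenate(([1.0], km))
        auc = np.sum(surv[:-1] * np.diff(knots))

        # Survival level at t_star and a crude local exponential tail rate
        s_star = surv[np.searchsorted(t, t_star, side="right")]
        tail = t > t_star
        lam = d[tail].sum() / (t[tail] - t_star).sum()  # events / exposure
        return auc + s_star / lam            # tail area is S(t_star)/lambda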
90.
A temporal point process can be defined either by the statistical properties of the time intervals between successive points or by those of the number of points falling in arbitrary time intervals. Mathematical expressions exist to link these two points of view, but they are in many cases too complicated to be used in practice. In this article, we present an algorithmic procedure for obtaining the number of points of a stationary point process recorded in given time intervals by processing the distances between successive points. We present some results concerning the statistical analysis of these numbers of points, and when analytical calculations are possible, the experimental results obtained with our algorithms are in excellent agreement with those predicted by the theory. Some properties of point processes for which theoretical calculations are almost impossible are also presented.
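The core bookkeeping of such a procedure can be sketched as follows, assuming half-open counting windows [a, b); this is a generic reconstruction, not the authors' code:

    import numpy as np

    def counts_from_gaps(gaps, windows):
        """Number of points of a point process in each window, given the
        successive inter-point distances (generic sketch).
        gaps: distances tau_1, tau_2, ...; windows: (a, b) pairs."""
        arrivals = np.cumsum(gaps)                   # point locations
        counts = []
        for a, b in windows:
            lo = np.searchsorted(arrivals, a, side="left")
            hi = np.searchsorted(arrivals, b, side="left")
            counts.append(hi - lo)                   # points with a <= t < b
        return np.array(counts)

As a sanity check, feeding i.i.d. exponential gaps produces a Poisson process, so the returned counts should be Poisson-distributed with mean proportional to the window length.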