81.
Nitis Mukhopadhyay, Communications in Statistics – Theory and Methods, 2013, 42(7): 1283-1297
The purpose of this article is two-fold. First, we explore a notion of optimality of the customary Jensen bound among all Jensen-type bounds; without such a result, the customary Jensen bound stands alone as simply another bound. The proposed notion and the associated optimality matter because in some situations Jensen's inequality leaves us empty-handed. When Jensen's inequality is highlighted, unfortunately only a handful of nearly routine applications are recycled time after time, and such encounters rarely produce any excitement. This article may change that outlook through its second purpose, which is to introduce a variety of unusual applications of Jensen's inequality. The collection of applications and their derivations is new.
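As a minimal illustration of the inequality this article builds on (our own toy example, not taken from the paper): for a convex function g, Jensen's inequality gives E[g(X)] ≥ g(E[X]), which a quick Monte Carlo check confirms.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)

# For convex g, Jensen's inequality gives E[g(X)] >= g(E[X]).
g = lambda v: v ** 2
mc_mean_of_g = g(x).mean()   # ~ E[X^2] = 8 for Exp(scale=2)
g_of_mean = g(x.mean())      # ~ (E[X])^2 = 4
print(mc_mean_of_g >= g_of_mean)
```

Here the gap E[X²] − (E[X])² is exactly the variance, one of the routine applications the article alludes to.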
82.
The operating characteristic (OC) curves of certain known-sigma variables sampling plans may be unsatisfactory in that they tend to reject even lots of acceptable quality. This note presents the theory and a method for identifying known-sigma variables plans whose OC curves are unsatisfactory in this sense.
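For a known-sigma, single upper-specification variables plan that accepts when (U − x̄)/σ ≥ k, the OC curve has the standard closed form Pa(p) = Φ(√n (z₁₋ₚ − k)). A short sketch (the plan constants n and k below are illustrative choices of ours, not taken from the note):

```python
from math import sqrt
from statistics import NormalDist

N01 = NormalDist()

def oc_known_sigma(p, n, k):
    """P(accept) for a known-sigma, single upper-limit variables plan
    that accepts the lot when (U - xbar)/sigma >= k, evaluated at true
    fraction nonconforming p (normal quality characteristic)."""
    z = N01.inv_cdf(1.0 - p)       # (U - mu)/sigma when quality level is p
    return N01.cdf(sqrt(n) * (z - k))

# Illustrative plan (n and k are made-up values for demonstration):
n, k = 10, 1.9
for p in (0.01, 0.05, 0.10):
    print(f"p={p:.2f}  Pa={oc_known_sigma(p, n, k):.3f}")
```

Plotting Pa against p over a fine grid gives the full OC curve, which is how plans with an unsatisfactory shape would be spotted.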
83.
We wish to test the null hypothesis that the means of N panels remain constant during the observation period of length T. A quasi-likelihood argument leads to self-normalized statistics whose limit distribution under the null hypothesis is double exponential. The main results are derived assuming that each panel consists of independent observations and are then extended to linear processes. The proofs are based on an approximation of the sum of squared CUSUM processes using the Skorokhod embedding scheme. A simulation study illustrates that our results can be used for small and moderate N and T. We apply our results to detect a change in the “corruption index”.
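A stripped-down sketch of the CUSUM idea for a single panel (the paper sums squared CUSUM processes over N panels and self-normalizes; this simplified max-CUSUM version is our own illustration):

```python
import numpy as np

def cusum_stat(x):
    """Max absolute standardized CUSUM for a change in the mean of one
    series; large values suggest the mean did not stay constant."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    s = np.cumsum(x - x.mean())        # CUSUM process, s[T-1] ~ 0
    sigma = x.std(ddof=1)
    return np.abs(s[:-1]).max() / (sigma * np.sqrt(T))

rng = np.random.default_rng(1)
no_change = rng.normal(0.0, 1.0, 200)
with_change = np.concatenate([rng.normal(0.0, 1.0, 100),
                              rng.normal(1.5, 1.0, 100)])
print(cusum_stat(no_change), cusum_stat(with_change))
```

Under no change the statistic behaves like the supremum of a Brownian bridge; a mid-sample mean shift inflates it sharply.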
84.
We develop a likelihood ratio test for an abrupt change point in Weibull hazard functions with covariates, including the two-piece constant hazard as a special case. We first define the log-likelihood ratio test statistic as the supremum of the profile log-likelihood ratio process over an interval that may contain an unknown change point. Using local asymptotic normality (LAN) and empirical measures, we show that the profile log-likelihood ratio process converges weakly to a quadratic form of Gaussian processes. We determine the critical values of the test and discuss how the test can be used for model selection. We also illustrate the method using the Chronic Granulomatous Disease (CGD) data.
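For the two-piece constant-hazard special case, the profile likelihood ratio over a grid of candidate change points can be sketched as follows (a simplified illustration of ours under right censoring, without the covariates or Weibull shape of the paper's full treatment):

```python
import numpy as np

def loglik_exp(d, r, lam):
    # Censored exponential log-likelihood: d events, r total time at risk.
    return d * np.log(lam) - lam * r if lam > 0 else -np.inf

def lrt_two_piece(t, event, grid):
    """Profile LRT statistic for one change point in a two-piece
    constant hazard. t: follow-up times, event: 1=event, 0=censored."""
    t = np.asarray(t, dtype=float)
    event = np.asarray(event, dtype=int)
    d, r = event.sum(), t.sum()
    ll0 = loglik_exp(d, r, d / r)                # null: one constant hazard
    best = -np.inf
    for tau in grid:
        r1 = np.minimum(t, tau).sum()            # time at risk before tau
        r2 = np.maximum(t - tau, 0.0).sum()      # time at risk after tau
        d1 = event[t <= tau].sum()
        d2 = d - d1
        if d1 == 0 or d2 == 0 or r2 == 0:
            continue
        ll = loglik_exp(d1, r1, d1 / r1) + loglik_exp(d2, r2, d2 / r2)
        best = max(best, ll)
    return 2 * (best - ll0)

rng = np.random.default_rng(2)
# Simulate hazard 0.5 before time 2 and hazard 2.0 after (memorylessness).
u = rng.exponential(2.0, 300)
t = np.where(u < 2.0, u, 2.0 + rng.exponential(0.5, 300))
stat = lrt_two_piece(t, np.ones(300, dtype=int), np.linspace(0.5, 4.0, 36))
print(stat)
```

With a genuine four-fold hazard jump the statistic is far above any plausible critical value; calibrating those critical values is exactly what the paper's weak-convergence result provides.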
85.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple-lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information-matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend-determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring in 1929.
86.
Robert D. Brooks 《Econometric Reviews》2013,32(1):35-53
The literature on testing for the presence of Rosenberg's (1973) return-to-normalcy random coefficient model is well developed, with both Shively (1988) and Brooks (1993) advocating the use of point optimal tests. This paper explores the robustness of point optimal testing for the Rosenberg alternative to two departures: the special-case Hildreth–Houck (1968) alternative and non-normality in the regression disturbances. The point optimal testing approach is found to be fairly robust to both departures.
87.
Endogenous growth theory has long been concerned with the contribution of technical progress to economic growth, yet modern research generally neglects the possibly biased effects of technical progress on heterogeneous factors, in particular whether technical progress is skill-biased and thereby induces a divergence in the compensation of different types of workers. Using a two-level nested CES production function and nonlinear seemingly unrelated regression, this paper estimates the level of the skill premium in China. We find that the elasticity of substitution between capital and labor is less than 1 while that between skilled and unskilled labor is greater than 1, and that the bias of technical progress and the substitution effect between skilled and unskilled labor are pronounced. Data simulated from the biased technical change model show no notable difference from the observed values, confirming that the skill premium originates in skill-biased technical change and that the bias effect keeps strengthening. Regression-based tests likewise show a significant positive effect of biased technical progress on the skill premium, verifying that China's skill premium is mainly the result of biased technical change.
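The two-level nested CES setup described above can be written, in one common textbook parameterization (our notation and a generic form, not necessarily the paper's exact specification), as:

```latex
% Outer nest: capital K_t and composite labor L_t
Y_t = A_t\left[\alpha K_t^{\rho} + (1-\alpha)L_t^{\rho}\right]^{1/\rho},
\qquad \sigma_{KL} = \frac{1}{1-\rho},
% Inner nest: skilled labor L_{s,t} and unskilled labor L_{u,t}
L_t = \left[\beta L_{s,t}^{\eta} + (1-\beta)L_{u,t}^{\eta}\right]^{1/\eta},
\qquad \sigma_{su} = \frac{1}{1-\eta},
% Skill premium as a ratio of marginal products of the two labor types
\frac{w_s}{w_u} = \frac{\beta}{1-\beta}\left(\frac{L_{s,t}}{L_{u,t}}\right)^{\eta-1}.
```

In this notation, the reported elasticity findings (σ_KL < 1, σ_su > 1) correspond to ρ < 0 and 0 < η < 1.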
88.
R. Hasan Abadi, Communications in Statistics – Simulation and Computation, 2013, 42(8): 1430-1443
Censored data arise naturally in a number of fields, particularly in problems of reliability and survival analysis. There are several types of censoring; in this article we confine ourselves to right random censoring. Recently, Ahmadi et al. (2010) considered the problem of estimating unknown parameters in a general framework based on right randomly censored data. They assumed that the survival function of the censoring time is free of the unknown parameter, an assumption that is sometimes inappropriate. In such cases, a proportional odds (PO) model may be more appropriate (Lam and Leung, 2001). Under this model, point and interval estimates of the unknown parameters are obtained in this article. Since it is important to check the adequacy of the models upon which inferences are based (Lawless, 2003, p. 465), two new goodness-of-fit tests for the PO model based on right randomly censored data are proposed. The procedures are applied to two real data sets due to Smith (2002), and a Monte Carlo simulation study is conducted to examine the behavior of the estimators.
89.
Zheng Su, Communications in Statistics – Simulation and Computation, 2013, 42(5): 611-620
In the analysis of time-to-event data, restricted mean survival time has been well investigated in the literature and is provided by many commercial software packages, while calculating mean survival time remains a challenge because of censoring or insufficient follow-up. Several researchers have proposed a hybrid estimator of mean survival based on the Kaplan–Meier curve with an extrapolated tail. However, this approach often yields biased estimates because the parameters of the extrapolated tail are poorly estimated and the tail of the Kaplan–Meier curve is highly variable when few patients remain at risk. Two key challenges are (1) where the extrapolation should start and (2) how to estimate the parameters of the extrapolated tail. The authors propose a novel approach to calculating mean survival time that addresses both challenges: an algorithm searches for time points at which the hazard rate changes significantly; the survival function is estimated by the Kaplan–Meier method before the last change point and approximated by an exponential function beyond it, with the exponential parameter estimated locally; and mean survival time is derived from this survival function. Simulation and case studies demonstrate the superiority of the proposed approach.
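A rough sketch of such a hybrid estimator (our own simplified version, with the extrapolation start `tau` supplied by hand rather than found by the paper's change-point search, and a hand-rolled Kaplan–Meier step function):

```python
import numpy as np

def km_curve(t, event):
    """Kaplan-Meier survival estimates at the observed event times."""
    order = np.argsort(t)
    t, event = t[order], event[order]
    n = len(t)
    times, surv = [], []
    s = 1.0
    for i in range(n):
        if event[i] == 1:
            s *= 1.0 - 1.0 / (n - i)       # n - i subjects still at risk
            times.append(t[i])
            surv.append(s)
    return np.array(times), np.array(surv)

def hybrid_mean_survival(t, event, tau):
    """Hybrid mean survival: area under the KM curve on [0, tau] plus an
    exponential tail beyond tau with a locally estimated hazard."""
    t = np.asarray(t, dtype=float)
    event = np.asarray(event, dtype=int)
    times, surv = km_curve(t, event)
    keep = times <= tau
    grid = np.concatenate([[0.0], times[keep], [tau]])
    steps = np.concatenate([[1.0], surv[keep]])    # S(t) on each interval
    area = float(np.sum(steps * np.diff(grid)))
    # Local exponential hazard after tau: events / time at risk beyond tau.
    late = t > tau
    d = int(event[late].sum())
    r = float((t[late] - tau).sum())
    tail = steps[-1] * r / d if d > 0 else 0.0     # S(tau) / lambda_hat
    return area + tail

# Demo on simulated Exp(mean 10) survival times with uniform censoring:
rng = np.random.default_rng(3)
n = 2000
true_t = rng.exponential(10.0, n)
cens = rng.uniform(0.0, 40.0, n)
obs = np.minimum(true_t, cens)
event = (true_t <= cens).astype(int)
m = hybrid_mean_survival(obs, event, tau=20.0)
print(m)   # should land near the true mean of 10
```

The tail term is the closed-form integral of S(tau)·exp(−λ(u − tau)) over (tau, ∞); everything hinges on how well λ is estimated from the sparse late data, which is exactly the difficulty the paper targets.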
90.
B. Picinbono, Communications in Statistics – Simulation and Computation, 2013, 42(1): 90-106
A point process in time can be defined either by the statistical properties of the intervals between successive points or by those of the number of points in arbitrary time intervals. Mathematical expressions exist to link these two points of view, but in many cases they are too complicated to use in practice. In this article, we present an algorithmic procedure for obtaining the number of points of a stationary point process recorded in given time intervals by processing the distances between successive points. We present results concerning the statistical analysis of these counts; where analytical calculation is possible, the experimental results obtained with our algorithms are in excellent agreement with those predicted by theory. Some properties of point processes for which theoretical calculation is almost impossible are also presented.
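The basic interval-to-count conversion the article starts from can be sketched in a few lines (our own minimal version, not the paper's algorithm): cumulative sums of the inter-arrival times give the arrival epochs, which are then binned into consecutive fixed-length windows.

```python
import numpy as np

def counts_from_intervals(intervals, window, n_windows):
    """Turn inter-arrival times into counts per consecutive window of
    fixed length, i.e. pass from the interval view to the count view."""
    arrivals = np.cumsum(intervals)                # arrival epochs
    edges = np.arange(n_windows + 1) * window      # window boundaries
    counts, _ = np.histogram(arrivals, bins=edges)
    return counts

rng = np.random.default_rng(4)
iat = rng.exponential(1.0, 10_000)    # Poisson process with rate 1
counts = counts_from_intervals(iat, window=5.0, n_windows=1000)
print(counts.mean(), counts.var())    # both near 5 for a Poisson process
```

For the Poisson case the count distribution is known exactly, which is the kind of analytically tractable check the article uses to validate its procedure.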