Search results: 403 articles in total.

By access type:
  Subscription full text: 392
  Free: 7
  Free (domestic): 4

By subject:
  Management: 55
  Population studies: 21
  Collected works: 12
  Theory and methodology: 21
  General: 55
  Sociology: 37
  Statistics: 202

By year:
  2023: 1    2022: 2    2021: 3    2020: 3    2019: 12   2018: 14
  2017: 17   2016: 7    2015: 15   2014: 17   2013: 98   2012: 22
  2011: 9    2010: 14   2009: 12   2008: 24   2007: 16   2006: 24
  2005: 18   2004: 12   2003: 7    2002: 4    2001: 7    2000: 7
  1999: 3    1998: 4    1997: 5    1996: 3    1995: 2    1993: 1
  1992: 2    1991: 2    1990: 1    1989: 3    1988: 3    1987: 2
  1985: 3    1984: 1    1983: 1    1982: 1    1981: 1
41.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if (point) identification requires the weights to take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. The attainable rate depends on relative tail conditions and can be as slow as n^{-1/4} in some examples. This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted models: (i) the binary response model under a mean restriction, introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator is weighted by the density of a special regressor; and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is weighted by a variant of the propensity score. Without strong relative support conditions, these models, like the well known “identified at infinity” models, yield estimators that converge more slowly than the parametric rate: essentially, point identification requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive rates of convergence and propose conducting inference using rate-adaptive procedures analogous to those of Andrews and Schafgans (1998) for the sample selection model.
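The paper's example (ii) weights observations by a variant of the propensity score. A minimal sketch (simulated data; the setup, model, and all names here are illustrative, not taken from the paper) of an inverse-propensity-weighted ATE estimator, showing how the weights blow up as propensities approach 0 or 1, the "thin set" problem driving the slow rate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))
# A strong covariate effect pushes propensities toward 0/1 ("thin sets").
p = 1 / (1 + np.exp(-3 * x[:, 0]))
d = rng.binomial(1, p)                      # treatment indicator
y = 2 * d + x[:, 0] + rng.normal(size=n)    # true ATE = 2

p_hat = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

# Inverse-propensity-weighted estimator of the average treatment effect:
ate_ipw = np.mean(d * y / p_hat - (1 - d) * y / (1 - p_hat))
print("ATE estimate:", ate_ipw)
print("largest weight:", (1 / np.minimum(p_hat, 1 - p_hat)).max())
```

Rerunning with a larger coefficient in the propensity model inflates the largest weights further and visibly destabilizes the estimate across seeds.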
42.
In Markov chain Monte Carlo analysis, rapid convergence of the chain to its target distribution is crucial. A chain that converges geometrically quickly is called geometrically ergodic. We explore geometric ergodicity for two-component Gibbs samplers (GS) that, under a chosen scanning strategy, evolve through one-at-a-time component-wise updates. We consider three such strategies: composition, random sequence, and random scans. We show that if any one of these scans produces a geometrically ergodic GS, then so do the others. Further, we provide a simple set of sufficient conditions for the geometric ergodicity of the GS. We illustrate our results using two examples.
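A minimal sketch (a bivariate normal target with correlation rho; the target and function names are illustrative, not from the paper) of a two-component Gibbs sampler under the three scanning strategies:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9  # correlation of the bivariate normal target

def update_x(y):  # x | y ~ N(rho*y, 1 - rho^2)
    return rho * y + np.sqrt(1 - rho**2) * rng.normal()

def update_y(x):  # y | x ~ N(rho*x, 1 - rho^2)
    return rho * x + np.sqrt(1 - rho**2) * rng.normal()

def gibbs(n_iter, scan="composition"):
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        if scan == "composition":          # deterministic x-then-y sweep
            x = update_x(y)
            y = update_y(x)
        elif scan == "random_sequence":    # fresh random order each sweep
            for comp in rng.permutation(["x", "y"]):
                if comp == "x":
                    x = update_x(y)
                else:
                    y = update_y(x)
        elif scan == "random_scan":        # one randomly chosen component
            if rng.random() < 0.5:
                x = update_x(y)
            else:
                y = update_y(x)
        draws[t] = (x, y)
    return draws

for s in ("composition", "random_sequence", "random_scan"):
    print(s, gibbs(20000, s).mean(axis=0))
```

All three scans target the same distribution; the paper's result is that geometric ergodicity under any one of them implies it under the others.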
43.
It is well known that even if the sample observations are correlated and not normal, the sample mean is approximately normal in large samples. But how large is large? This question is investigated in this paper. In particular, the relation between the rate of convergence and the correlation properties of the observations is explored. It is observed that correlation, in general, retards the rate of convergence.
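A minimal simulation sketch (an AR(1) process with skewed innovations; entirely illustrative, not the paper's setup) of how stronger correlation slows the sample mean's convergence to normality, measured by Kolmogorov-Smirnov distance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def ar1_sample_means(phi, n, n_rep=2000):
    """Standardized sample means of AR(1) series with skewed,
    mean-zero (exponential minus one) innovations."""
    means = np.empty(n_rep)
    for r in range(n_rep):
        e = rng.exponential(size=n) - 1.0
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        means[r] = x.mean()
    return (means - means.mean()) / means.std()

# KS distance from N(0,1): larger phi tends to give a larger distance
# at the same n, i.e. slower convergence.
for phi in (0.0, 0.5, 0.9):
    d, _ = stats.kstest(ar1_sample_means(phi, n=200), "norm")
    print(f"phi={phi}: KS distance {d:.3f}")
```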
44.
We propose a novel observation-driven finite mixture model for the study of banking data. The model accommodates time-varying component means and covariance matrices, normal and Student's t distributed mixtures, and economic determinants of the time-varying parameters. Monte Carlo experiments suggest that units of interest can be classified reliably into distinct components in a variety of settings. In an empirical study of 208 European banks from 2008Q1 to 2015Q4, we identify six business model components and discuss how their properties evolve over time. Changes in the yield curve predict changes in average business model characteristics.
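As a greatly simplified stand-in for the paper's model (static parameters rather than observation-driven time-varying ones, Gaussian rather than Student's t components, and simulated rather than real bank data), a sketch of classifying units into mixture components:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical bank-level features, e.g. (loan ratio, trading-asset ratio).
retail = rng.normal([0.7, 0.1], 0.05, size=(120, 2))
investment = rng.normal([0.3, 0.5], 0.05, size=(90, 2))
X = np.vstack([retail, investment])

gm = GaussianMixture(n_components=2, covariance_type="full").fit(X)
labels = gm.predict(X)        # hard classification of each bank
probs = gm.predict_proba(X)   # posterior component probabilities
print(np.bincount(labels), probs[:3].round(2))
```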
45.
The cost and duration of many industrial experiments can be reduced using supersaturated designs, which screen the important factors out of a large set of potentially active variables. A supersaturated design is a design with fewer runs than effects to be estimated. Although construction methods for supersaturated designs have been studied extensively, their analysis methods are still at an early research stage. In this article, we propose a method for analyzing data using a correlation-based measure called symmetrical uncertainty. This measure comes from information theory and underlies variable selection algorithms developed in data mining. In this work, symmetrical uncertainty is used from another viewpoint, to determine the important factors more directly. The method enables us to use supersaturated designs for analyzing data from generalized linear models with a Bernoulli response. We evaluate our method using some existing supersaturated designs, obtained by the methods of Tang and Wu (1997) and of Koukouvinos et al. (2008). The comparison is performed through simulation experiments, and Type I and Type II error rates are calculated. Additionally, receiver operating characteristic (ROC) curve methodology is applied as a further statistical tool for performance evaluation.
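Symmetrical uncertainty is the normalized mutual information SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), which is 0 for independent variables and 1 when either variable determines the other. A minimal sketch (discrete variables assumed; the factor/response example is illustrative) of computing it from sample entropies:

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of a discrete sample."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    mi = hx + hy - hxy               # mutual information I(X; Y)
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0

# A +/-1 factor column and a Bernoulli response it partly drives:
rng = np.random.default_rng(4)
f = rng.choice([-1, 1], size=200)
resp = rng.binomial(1, np.where(f == 1, 0.8, 0.2))
print(symmetrical_uncertainty(f.tolist(), resp.tolist()))
```

Ranking factor columns of a supersaturated design by their SU with the response gives a direct screen for potentially active factors.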
46.
The problem of estimating the difference between two Poisson means is considered. A new moment confidence interval (CI) and a fiducial CI for the difference between the means are proposed. The moment CI is simple to compute, and it specializes to the classical Wald CI when the sample sizes are equal. Numerical studies indicate that the moment CI improves on the Wald CI when the sample sizes differ. Exact properties of the CIs based on the moment, fiducial, and hybrid methods are evaluated numerically. Our numerical study indicates that the hybrid and fiducial CIs are in general comparable, and the moment CI seems to be the best when the expected total counts from both distributions are two or more. The interval estimation procedures are illustrated using two examples.
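For reference, a minimal sketch (illustrative, not the paper's new moment CI) of the classical Wald CI to which the moment CI reduces for equal sample sizes: with total counts X ~ Poisson(n1*lambda1) and Y ~ Poisson(n2*lambda2), the estimator X/n1 - Y/n2 has estimated variance X/n1^2 + Y/n2^2.

```python
import numpy as np
from scipy import stats

def wald_ci_poisson_diff(x, n1, y, n2, level=0.95):
    """Wald CI for lambda1 - lambda2 from total counts x and y
    observed over n1 and n2 sampling units, respectively."""
    est = x / n1 - y / n2
    se = np.sqrt(x / n1**2 + y / n2**2)
    z = stats.norm.ppf(0.5 + level / 2)
    return est - z * se, est + z * se

# Example: 40 events over 25 units vs. 25 events over 20 units.
print(wald_ci_poisson_diff(40, 25, 25, 20))
```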
47.
We examine the sizes and powers of three tests of convergence of Markov chain Monte Carlo draws: the Kolmogorov–Smirnov test, the fluctuation test, and Geweke's test. We show that the sizes and powers are sensitive to autocorrelation in the draws. We propose a filtered test that corrects for autocorrelation. We present a numerical illustration using the Federal funds rate.
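A minimal sketch of one plausible filtering approach (the paper's exact filter is not specified here, so this AR prewhitening is an assumption): fit an autoregression to each segment of the chain and apply the Kolmogorov–Smirnov test to the whitened residuals rather than the raw, autocorrelated draws.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.ar_model import AutoReg

def filtered_ks_test(first_half, second_half, lags=5):
    """Two-sample KS test on AR-prewhitened MCMC draws: an
    illustrative autocorrelation correction, not the paper's."""
    resid1 = AutoReg(first_half, lags=lags).fit().resid
    resid2 = AutoReg(second_half, lags=lags).fit().resid
    return stats.ks_2samp(resid1, resid2)

# A stationary AR(1) "chain": both halves share the same target,
# so the filtered test should not reject.
rng = np.random.default_rng(5)
e = rng.normal(size=20000)
chain = np.empty(20000)
chain[0] = e[0]
for t in range(1, 20000):
    chain[t] = 0.8 * chain[t - 1] + e[t]
print(filtered_ks_test(chain[:10000], chain[10000:]))
```

Applying ks_2samp directly to the raw halves of this chain rejects far too often at nominal size, which is the size distortion the filtering is meant to remove.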
48.
China currently measures its urbanization level with two indicators: the urbanization rate of the registered (hukou) population and the urbanization rate of the permanent resident population. Between the resident population and the registered population stands the enormous migrant worker group. The existence of these dual urbanization rates stems from the urban-rural dual system that accompanied primitive capital accumulation and from the price scissors between factors. This institutional design artificially lowered the costs of industrialization and urbanization in order to accelerate both. While rural labor supply exceeded demand, this model of urbanization still had room to develop; once rural labor supply and demand reach a turning point, it becomes unsustainable, and objective conditions require that people-centered new urbanization replace the traditional model. Replacing resident-population urbanization with registered-population urbanization is the inevitable trend. China applies a residence permit system to migrants as a transitional measure for turning migrant workers into urban citizens, or for raising the registered-population urbanization rate. For migrants themselves, however, the interest-maximizing strategy is to keep a rural hukou while also holding an urban residence permit. This ensures that the dual urbanization rates will persist for a long time.
49.
Since 1996, to stimulate consumption, expand domestic demand, and promote economic recovery and prosperity, the People's Bank of China has cut the RMB deposit and lending rates of financial institutions eight times in succession, yet China's savings have remained stubbornly high and the policy effect of the rate cuts has not been evident. This paper argues that the global economic slowdown is the external cause, while the behavior of economic agents (households, firms, banks, and the government) is the fundamental reason the rate cuts have been ineffective. Solving this practical problem therefore requires a society-wide effort: rate cuts will be effective only when used in concert with other policies.
50.
Two indices of creatinine clearance (an index of kidney function) are compared on a group of cancer patients who underwent chemotherapy with a potentially nephrotoxic drug. The standard index, measured creatinine clearance (MCC), is cumbersome to use, whereas the more convenient alternative, estimated creatinine clearance (ECC), has not yet been conclusively evaluated on cancer patients. We conclude that under certain clinical conditions ECC and MCC are identically calibrated for males, but not for females, and we obtain estimated true and false positive rates for assessing the use of ECC instead of MCC as a diagnostic tool. We use a model that is formally equivalent to an errors-in-variables model with (unbalanced) repeated observations and correlated measurement errors. The bootstrap is used to obtain standard errors and confidence limits.
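A minimal sketch (simulated paired measurements; the data, sample sizes, and the simple slope summary are illustrative, not the paper's errors-in-variables model) of the nonparametric bootstrap for standard errors of a calibration slope relating ECC to MCC, resampling patients so within-patient correlation is preserved:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical paired clearances per patient (ml/min): MCC and ECC.
mcc = rng.normal(80, 15, size=60)
ecc = 0.95 * mcc + rng.normal(0, 8, size=60)

def calibration_slope(m, e):
    return np.polyfit(m, e, deg=1)[0]   # slope of ECC regressed on MCC

boot = np.array([
    calibration_slope(mcc[idx], ecc[idx])
    for idx in (rng.integers(0, 60, size=60) for _ in range(2000))
])
print("slope:", calibration_slope(mcc, ecc))
print("bootstrap SE:", boot.std(ddof=1))
print("95% percentile CI:", np.percentile(boot, [2.5, 97.5]))
```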