101.
Summary: In this study, a concept for cumulating periodic household budget surveys is developed and presented for discussion within the frame of the project "Official Statistics and Socio-Economic Questions". We lay out the theoretical foundations and solve the central task of structural demographic weighting with a calibration approach based on information theory. Building on the household budget surveys of the German Federal Statistical Office (the periodic household budget surveys and the Income and Consumption Sample, EVS), a concrete concept for cumulating yearly household budget surveys is then proposed. This achieves the goal of cumulating cross-sections into a more comprehensive cumulated sample for finely disaggregated analyses. Simulation studies to evaluate the concept are to follow.
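The structural demographic weighting step can be illustrated by raking (iterative proportional fitting), a simple stand-in for the information-theoretic calibration approach the study describes. A minimal sketch; the demographic cells and population margins below are hypothetical.

```python
# Raking: rescale survey weights until cell totals match known
# population margins in both dimensions (e.g. age group x household size).
def rake(weights, row_margins, col_margins, iters=50):
    w = dict(weights)
    for _ in range(iters):
        for i in row_margins:                      # match row totals
            s = sum(w[(i, j)] for j in col_margins)
            for j in col_margins:
                w[(i, j)] *= row_margins[i] / s
        for j in col_margins:                      # match column totals
            s = sum(w[(i, j)] for i in row_margins)
            for i in row_margins:
                w[(i, j)] *= col_margins[j] / s
    return w

start = {(i, j): 1.0 for i in "AB" for j in "XY"}  # uniform start weights
w = rake(start, {"A": 60.0, "B": 40.0}, {"X": 30.0, "Y": 70.0})
```

For a two-way table with consistent margins this converges to the minimum-discrimination-information adjustment of the starting weights, which is why raking is a natural first approximation to a calibration estimator.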
102.
In this paper, we consider the problem of model-robust design for simultaneous parameter estimation within the class of polynomial regression models of degree up to k. A generalized D-optimality criterion, the Ψα-optimality criterion first introduced by Läuter (1974), is considered for this problem. By applying the theory of canonical moments and the maximin principle, we derive a model-robust optimal design in the sense of having the highest minimum Ψα-efficiency. Numerical comparison indicates that the proposed design performs remarkably well for parameter estimation under all of the rival models considered.
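The Ψα criterion generalizes D-optimality, so the basic computation can be sketched with plain D-optimality: score a candidate design by the determinant of its information matrix M = Σᵢ wᵢ f(xᵢ)f(xᵢ)ᵀ with f(x) = (1, x, x²). The comparison design below is hypothetical; the three-point design is the known D-optimal design for quadratic regression on [-1, 1].

```python
from fractions import Fraction

def info_det(points, weights):
    """Determinant of the information matrix for quadratic regression."""
    f = lambda x: [1, x, x * x]
    M = [[sum(w * f(x)[a] * f(x)[b] for x, w in zip(points, weights))
          for b in range(3)] for a in range(3)]
    # 3x3 determinant by cofactor expansion
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

third = Fraction(1, 3)
d_opt = info_det([-1, 0, 1], [third] * 3)          # D-optimal on [-1, 1]
d_alt = info_det([Fraction(-1), Fraction(-1, 3), Fraction(1, 3), Fraction(1)],
                 [Fraction(1, 4)] * 4)             # a hypothetical rival
```

Exact rational arithmetic makes the comparison unambiguous: the classical design attains determinant 4/27, strictly larger than the four-point alternative's.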
103.
A Bayesian method for estimating a time-varying regression model subject to structural breaks is proposed. Heteroskedastic dynamics, via both GARCH and stochastic volatility specifications, and an autoregressive factor, itself subject to breaks, are added to generalize the standard return prediction model, so that the relationship and how it changes over time can be estimated and examined efficiently. A Bayesian computational method is employed to identify the locations of the structural breaks and to carry out estimation and inference, simultaneously accounting for heteroskedasticity and autocorrelation. The proposed methods are illustrated on simulated data. An empirical study of the Taiwan and Hong Kong stock markets, using oil and gas price returns as a state variable, then provides strong support for oil prices being an important explanatory variable for stock returns.
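The break-location step can be sketched in a much-reduced setting: a single mean shift with known noise scale and a flat prior over break points, scored by a profile likelihood rather than the paper's full MCMC with GARCH/stochastic-volatility dynamics. All data below are simulated for illustration.

```python
import math
import random

def break_posterior(y, sigma):
    """Normalized posterior over break locations (flat prior), using
    the profile likelihood with segment means plugged in."""
    n = len(y)
    logp = {}
    for tau in range(2, n - 1):            # break after index tau - 1
        a, b = y[:tau], y[tau:]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        rss = sum((v - ma) ** 2 for v in a) + sum((v - mb) ** 2 for v in b)
        logp[tau] = -rss / (2 * sigma ** 2)
    m = max(logp.values())                 # stabilize before exponentiating
    w = {t: math.exp(l - m) for t, l in logp.items()}
    z = sum(w.values())
    return {t: v / z for t, v in w.items()}

random.seed(0)
y = ([random.gauss(0.0, 0.5) for _ in range(30)]
     + [random.gauss(5.0, 0.5) for _ in range(30)])   # true break at 30
post = break_posterior(y, 0.5)
tau_hat = max(post, key=post.get)
```

A full treatment would place priors on the segment parameters and sample break locations jointly with the volatility states, but the discrete posterior over candidate break points is the common core.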
104.
Prostate cancer (PrCA) is the most common cancer diagnosed in American men and the second leading cause of their death from malignancies. Large geographical variation and racial disparities exist in PrCA survival rates. Much work on spatial survival models is based on the proportional hazards (PH) model, but little has focused on the accelerated failure time (AFT) model. In this paper, we investigate the Louisiana PrCA data from the Surveillance, Epidemiology, and End Results program; violation of the PH assumption suggests that a spatial survival model based on the AFT model is more appropriate for this data set. To account for possible extra-variation, we consider spatially referenced independent or dependent spatial structures. The deviance information criterion is used to select the best-fitting model within the Bayesian framework. The results of our study indicate that age, race, stage, and geographical distribution are significant in evaluating PrCA survival.
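The AFT idea itself can be sketched in miniature: under a log-normal AFT model, log T = β₀ + β₁x + σε, a covariate acts multiplicatively on survival time, and with no censoring the coefficients can be recovered by least squares on log times. This is only a stand-in for the paper's Bayesian spatial AFT model; the data and coefficients below are simulated.

```python
import math
import random

# Simulate log-normal AFT data: log T = b0 + b1 * x + sigma * eps
random.seed(1)
b0, b1, sigma = 2.0, -0.8, 0.3
x = [random.uniform(0.0, 2.0) for _ in range(500)]
t = [math.exp(b0 + b1 * xi + random.gauss(0.0, sigma)) for xi in x]

# Least squares fit on the log scale (valid only without censoring)
logt = [math.log(v) for v in t]
n = len(x)
mx, my = sum(x) / n, sum(logt) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, logt))
b1_hat = sxy / sxx
b0_hat = my - b1_hat * mx
```

With censored times or spatial frailties one would instead build the likelihood from the AFT survival function, as the paper does within a Bayesian framework.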
105.
In this paper we propose a computationally efficient algorithm to estimate the parameters of a 2-D sinusoidal model in the presence of stationary noise. The estimators obtained by the proposed algorithm are consistent and asymptotically equivalent to the least squares estimators. Monte Carlo simulations are performed for different sample sizes, and it is observed that the performance of the proposed method is quite satisfactory and equivalent to that of the least squares estimators. The main advantage of the proposed method is that the estimators can be obtained in a finite number of iterations; in fact, it is shown that, starting from the average of periodogram estimators, the proposed algorithm converges in only three steps. One synthesized texture data set and one real texture data set are analyzed using the proposed algorithm for illustration.
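The periodogram starting point mentioned above can be sketched directly: for y(m, n) = cos(λm + μn) + noise, the 2-D periodogram peaks at the true frequency pair, and its argmax over Fourier bins gives the initial estimate that the algorithm then refines. A small simulated example with the true frequencies placed on Fourier bins:

```python
import cmath
import math
import random

M = N = 16
lam, mu = 2 * math.pi * 3 / M, 2 * math.pi * 5 / N   # true frequencies
random.seed(2)
y = [[math.cos(lam * m + mu * n) + random.gauss(0.0, 0.1)
      for n in range(N)] for m in range(M)]

def periodogram(p, q):
    """I(p, q) = |2-D DFT of y at bin (p, q)|^2 / (M * N)."""
    s = sum(y[m][n] * cmath.exp(-2j * math.pi * (p * m / M + q * n / N))
            for m in range(M) for n in range(N))
    return abs(s) ** 2 / (M * N)

# Search one half-plane to avoid the conjugate-symmetric mirror peak.
peak = max(((p, q) for p in range(M // 2 + 1) for q in range(N // 2 + 1)),
           key=lambda pq: periodogram(*pq))
```

The proposed algorithm's contribution is what happens after this step: a fixed, three-iteration refinement that reaches the efficiency of least squares without a full nonlinear search.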
106.
The problem of sample size determination in the context of Bayesian analysis is considered. For the familiar and practically important parameter of a geometric distribution with a beta prior, three different Bayesian approaches based on highest posterior density intervals are discussed. A computer program handles all computational complexities and is available upon request.
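The conjugacy behind this setup is simple to sketch: for geometric observations x (trials to first success) with success probability p and a Beta(a, b) prior, the posterior is Beta(a + n, b + Σx − n), and an HPD interval can be read off the posterior density. A grid-based sketch with hypothetical data (the paper's sample-size criteria would then be applied to intervals like this one):

```python
import math

x = [1, 2, 1, 3, 1, 2, 1, 1, 2, 1]        # hypothetical geometric data
a0, b0 = 1.0, 1.0                          # Beta(1, 1) prior
n, s = len(x), sum(x)
a, b = a0 + n, b0 + s - n                  # conjugate update: Beta(11, 6)

def beta_pdf(p):
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(logc + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

# Grid HPD: take grid points in decreasing density order until 95% mass.
grid = [(i + 0.5) / 2000 for i in range(2000)]
dens = sorted(((beta_pdf(p), p) for p in grid), reverse=True)
mass, inside = 0.0, []
for d, p in dens:
    if mass >= 0.95:
        break
    mass += d / 2000                       # Riemann weight of one cell
    inside.append(p)
hpd = (min(inside), max(inside))
```

Sample size determination then amounts to choosing n so that the resulting HPD interval meets a length or coverage requirement, averaged over the prior predictive distribution of the data.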
107.
We consider the problem of constructing nonlinear regression models with Gaussian basis functions, using lasso regularization. Regularization with a lasso penalty is advantageous in that it estimates some coefficients in linear regression models to be exactly zero. We propose imposing a weighted lasso penalty on a nonlinear regression model and thereby selecting the number of basis functions effectively. To select the tuning parameters in the regularization method, we use the deviance information criterion proposed by Spiegelhalter et al. (2002), calculating the effective number of parameters by Gibbs sampling. Simulation results demonstrate that our methodology performs well in various situations.
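The basis-selection effect can be sketched with coordinate descent and soft thresholding on a Gaussian-basis expansion: basis functions that contribute little get coefficients that are exactly zero. The centers, widths, penalty weights, and target curve below are all hypothetical, and tuning-parameter selection via DIC is omitted.

```python
import math

xs = [i / 49 for i in range(50)]
ys = [math.exp(-(x - 0.5) ** 2 / (2 * 0.1 ** 2)) for x in xs]  # one bump

centers = [0.1, 0.3, 0.5, 0.7, 0.9]
phi = [[math.exp(-(x - c) ** 2 / (2 * 0.1 ** 2)) for c in centers]
       for x in xs]
wts = [1.0] * len(centers)             # weighted-lasso penalty weights
lam = 0.1

def soft(v, t):
    """Soft-thresholding operator: exact zeros when |v| <= t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

n, k = len(xs), len(centers)
coef = [0.0] * k
for _ in range(200):                   # coordinate descent sweeps
    for j in range(k):
        # correlation of basis j with the partial residual
        rho = sum(phi[i][j] * (ys[i]
                               - sum(coef[l] * phi[i][l] for l in range(k))
                               + coef[j] * phi[i][j])
                  for i in range(n)) / n
        z = sum(phi[i][j] ** 2 for i in range(n)) / n
        coef[j] = soft(rho, lam * wts[j]) / z
```

The bases far from the bump are thresholded to exactly zero, which is the mechanism by which the weighted lasso selects the effective number of basis functions.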
108.
109.
Banks have been accumulating huge databases for many years and are increasingly turning to statistics to provide insight into, among other things, customer behaviour. Credit risk is an important issue, and stochastic models have been developed in recent years to describe and predict loan default. Two of the major models currently used in the industry are considered here, and various ways of extending their application to the case where a loan is repaid in installments are explored. The aspect of interest is the probability distribution of the total loss due to repayment default at some time. The loss distribution is thus determined by the distribution of times to default, regarded here as a discrete-time survival distribution. In particular, the probabilities of large losses are to be assessed for insurance purposes.
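The link between the discrete-time survival distribution and the loss distribution can be sketched by enumeration: given a hazard of default for each month of an installment loan, each default time maps to an outstanding balance, and the survival probabilities give the mass on each loss amount. The hazards and loan terms below are hypothetical.

```python
# Installment loan: equal payments, no interest, for simplicity.
principal, months = 1200.0, 12
payment = principal / months
h = [0.01, 0.01, 0.02, 0.02, 0.02, 0.02,       # hypothetical monthly
     0.01, 0.01, 0.01, 0.01, 0.01, 0.01]       # default hazards

dist = {}                                      # loss amount -> probability
surv = 1.0                                     # P(no default before month t)
for t in range(months):
    balance = principal - payment * t          # balance entering month t
    p_default = surv * h[t]                    # default exactly in month t
    dist[balance] = dist.get(balance, 0.0) + p_default
    surv *= 1 - h[t]
dist[0.0] = dist.get(0.0, 0.0) + surv          # loan fully repaid

expected_loss = sum(loss * p for loss, p in dist.items())
tail = sum(p for loss, p in dist.items() if loss >= 1000.0)
```

Tail probabilities such as `tail` are exactly the large-loss quantities the abstract says must be assessed for insurance purposes; the industry models differ mainly in how the hazards h[t] are specified and estimated.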
110.
Summary. A method of Bayesian model selection for join point regression models is developed. Given a set of K+1 join point models M0, M1, …, MK with 0, 1, …, K join points respectively, the posterior distributions of the parameters and of the competing models Mk are computed by Markov chain Monte Carlo simulation. The Bayes information criterion BIC is used to select the model Mk with the smallest value of BIC as the best model. Another approach, based on the Bayes factor, selects the model Mk with the largest posterior probability as the best model when the prior distribution over the Mk is discrete uniform. Both methods are applied to analyse observed US cancer incidence rates for selected cancer sites. Graphs of the join point models fitted to the data by the proposed methods are produced and compared with the method of Kim and co-workers, which is based on a series of permutation tests. The analyses show that the Bayes factor is sensitive to the prior specification of the variance σ², and that the model selected by BIC fits the data as well as the model selected by the permutation test, with the advantage of producing the posterior distribution of the join points. The Bayesian join point model and model selection method presented here will be integrated into the National Cancer Institute's join point software ( http://www.srab.cancer.gov/joinpoint/ ) and will be available to the public.
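The BIC route can be sketched without MCMC: a continuous one-join-point model is a linear model in the basis {1, x, (x − τ)₊}, so each candidate τ can be fitted by least squares and the join point count chosen by BIC. The simulated trend and candidate grid below are hypothetical stand-ins for the cancer incidence series.

```python
import math
import random

def lstsq(X, y):
    """Solve the normal equations (X'X) b = X'y by Gauss-Jordan."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))  # partial pivot
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

def bic(X, y):
    """Gaussian BIC with k regression parameters (error variance profiled)."""
    b = lstsq(X, y)
    rss = sum((yi - sum(bj * xj for bj, xj in zip(b, r))) ** 2
              for r, yi in zip(X, y))
    n, k = len(y), len(X[0])
    return n * math.log(rss / n) + k * math.log(n)

random.seed(3)
xs = [i * 0.5 for i in range(21)]                    # 0, 0.5, ..., 10
ys = [(x if x <= 5 else 5 - 0.8 * (x - 5)) + random.gauss(0.0, 0.05)
      for x in xs]                                   # true join at x = 5

bic0 = bic([[1.0, x] for x in xs], ys)               # no join point
best = min((bic([[1.0, x, max(0.0, x - t)] for x in xs], ys), t)
           for t in range(2, 9))                     # candidate join points
```

The Bayesian approach in the paper goes further by sampling τ, which yields a full posterior distribution for the join points rather than a single BIC-selected value.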
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)