981.
Parameter estimation of the generalized Pareto distribution—Part II   (Cited by: 1; self-citations: 0, other citations: 1)
This is the second part of a paper which focuses on reviewing methods for estimating the parameters of the generalized Pareto distribution (GPD). The GPD is a very important distribution in the extreme value context. It is commonly used for modeling observations that exceed very high thresholds. The ultimate success of the GPD in applications evidently depends on the parameter estimation process. Quite a few methods exist in the literature for estimating the GPD parameters. Estimation procedures such as maximum likelihood (ML), the method of moments (MOM) and the probability weighted moments (PWM) method were described in Part I of the paper. We shall continue to review methods for estimating the GPD parameters, in particular methods that are robust and procedures that use the Bayesian methodology. As in Part I, we shall focus on those that are relatively simple and straightforward to apply to real-world data.
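To illustrate the kind of estimation the paper reviews, the following is a minimal sketch of an ML fit of the GPD to threshold exceedances using scipy's `genpareto`; the simulated data, the 95% threshold, and the variable names are invented here, not taken from the paper.

```python
# Hypothetical sketch: ML fit of the generalized Pareto distribution (GPD)
# to exceedances over a high threshold. Data and threshold are illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.pareto(3.0, size=5000) + 1.0     # a heavy-tailed sample
threshold = np.quantile(data, 0.95)         # a "very high" threshold
exceedances = data[data > threshold] - threshold

# ML estimates of the GPD shape (xi) and scale (sigma); location fixed at 0
xi, loc, sigma = genpareto.fit(exceedances, floc=0)
print(xi, sigma)
```

The MOM and PWM procedures from Part I would instead match empirical moments (or probability weighted moments) of the exceedances to their GPD expressions.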
982.
In this paper we deal with robust inference in heteroscedastic measurement error models. Rather than the normal distribution, we postulate a Student t distribution for the observed variables. Maximum likelihood estimates are computed numerically. Consistent estimation of the asymptotic covariance matrices of the maximum likelihood and generalized least squares estimators is also discussed. Three test statistics are proposed for testing hypotheses of interest with the asymptotic chi-square distribution which guarantees correct asymptotic significance levels. Results of simulations and an application to a real data set are also reported.
983.
In this article we use Monte Carlo analysis to assess the small sample behaviour of the OLS, the weighted least squares (WLS) and the mixed effects meta-estimators under several types of effect size heterogeneity, using the bias, the mean squared error and the size and power of the statistical tests as performance indicators. Specifically, we analyse the consequences of heterogeneity in effect size precision (heteroskedasticity) and of two types of random effect size variation, one where the variation holds for the entire sample, and one where only a subset of the sample of studies is affected. Our results show that the mixed effects estimator is to be preferred to the other two estimators in the first two situations, but that WLS outperforms OLS and mixed effects in the third situation. Our findings therefore show that, under circumstances that are quite common in practice, using the mixed effects estimator may be suboptimal and that the use of WLS is preferable.
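For readers unfamiliar with the WLS meta-estimator compared here, the sketch below shows its standard inverse-variance form on invented study-level data (the effect sizes and standard errors are illustrative, not from the article's simulations).

```python
# Illustrative sketch (not the authors' code): the inverse-variance WLS
# meta-estimator of a common effect size across studies.
import numpy as np

effects = np.array([0.20, 0.35, 0.10, 0.28])   # hypothetical study effect sizes
se = np.array([0.05, 0.10, 0.08, 0.04])        # hypothetical standard errors

w = 1.0 / se**2                                 # precision weights
wls_estimate = np.sum(w * effects) / np.sum(w)  # precision-weighted mean
wls_se = np.sqrt(1.0 / np.sum(w))               # its standard error
print(wls_estimate, wls_se)
```

OLS corresponds to equal weights, while the mixed effects estimator adds an estimated between-study variance component to each study's weight.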
984.
We use time-series cross-section analysis to provide empirical validation for the existence of a specific American ethos and a specific European ethos with respect to economic policy. In our innovation, economic policy is proxied by “economic freedom” from the Fraser Institute database and constitutional “political institutions” are proxied by variables from the Database of Political Institutions (from the World Bank). Our results suggest that, once we control for political and institutional differences, the United States and Europe still pursue different economic policies.
Zane A. Spindler   born in 1941, has a Ph.D. in economics (Michigan State University, 1968) and has been a professor in the Department of Economics, Simon Fraser University, since 1967. His current research interests include constitutional foundations of economic freedom, central bank governance, and the evolution of land contests. His works have been published in the Canadian Journal of Economics, Constitutional Political Economy, Oxford Economic Papers, Public Choice, Public Organization Review, and South African Journal of Economics. Xavier de Vanssay   born in 1961, has a Ph.D. in economics (Simon Fraser University, 1992) and has been a professor in the Department of Economics, Glendon College, York University, since 1990. His current research interests include constitutional foundations of economic freedom, monetary institutions, and trade policy. His works have been published in the Journal of Economic Education, Public Finance Quarterly, Constitutional Political Economy, Public Choice, and South African Journal of Economics. Vincent Hildebrand   born in 1970, has a Ph.D. in economics (York University, 2001) and has been a professor in the Department of Economics, Glendon College, York University, since 2002. His current research interests explore disparities in the distribution of wealth across gender, race and ethnicity. His works have been published in the Journal of Human Resources, the Review of Income and Wealth, Social Science Quarterly, Constitutional Political Economy and Environmental and Resource Economics.
985.
A Bayesian model consists of two elements: a sampling model and a prior density. The problem of selecting a prior density is nothing but the problem of selecting a Bayesian model where the sampling model is fixed. A predictive approach is used through a decision problem where the loss function is the squared L2 distance between the sampling density and the posterior predictive density, because the aim of the method is to choose the prior that provides a posterior predictive density as good as possible. An algorithm based on Lavine's linearization technique is developed for solving the problem.
986.
In this paper we present a procedure for finding the optimal order of a response polynomial. The procedure is based on the prediction distribution of future observations. The maximal length of the structural β-expectation tolerance region for each polynomial is calculated. The minimum of these maximal lengths determines the optimal order of the response polynomial.
987.
We evaluate the effects of college choice on earnings using Swedish register databases. This case study is used to motivate the introduction of a novel procedure to analyse the sensitivity of such an observational study to the assumption made that there are no unobserved confounders – variables affecting both college choice and earnings. This assumption is not testable without further information, and should be considered an approximation of reality. To perform a sensitivity analysis, we measure the departure from the unconfoundedness assumption with the correlation between college choice and earnings when conditioning on observed covariates. The use of a correlation as a measure of dependence allows us to propose a standardised procedure by advocating the use of a fixed value for the correlation, typically 1% or 5%, when checking the sensitivity of an evaluation study. A correlation coefficient is, moreover, intuitive to most empirical scientists, which makes the results of our sensitivity analysis easier to communicate than those of previously proposed methods. In our evaluation of the effects of college choice on earnings, the significantly positive effect obtained could not be questioned by a sensitivity analysis allowing for unobserved confounders inducing at most 5% correlation between college choice and earnings.
988.
In this paper, we propose a general class of Gamma frailty transformation models for multivariate survival data. The transformation class includes the commonly used proportional hazards and proportional odds models. The proposed class also includes a family of cure rate models. Under an improper prior for the parameters, we establish propriety of the posterior distribution. A novel Gibbs sampling algorithm is developed for sampling from the observed data posterior distribution. A simulation study is conducted to examine the properties of the proposed methodology. An application to a data set from a cord blood transplantation study is also reported.
989.
This paper provides an introduction to utilities for statisticians working mainly in clinical research who have not had experience of health technology assessment work. Utility is the numeric valuation applied to a health state based on the preference of being in that state relative to perfect health. Utilities are often combined with survival data in health economic modelling to obtain quality‐adjusted life years. There are several methods available for deriving the preference weights and the health states to which they are applied, and combining them to estimate utilities, and the clinical statistician has valuable skills that can be applied in ensuring the robustness of the trial design, data collection and analyses to obtain and handle this data. In addition to raising awareness of the subject and providing source references, the paper outlines the concepts and approaches around utilities using examples, discusses some of the key issues, and proposes areas where statisticians can collaborate with health economic colleagues to improve the quality of this important element of health technology assessment. Copyright © 2014 John Wiley & Sons, Ltd.
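The basic combination of utilities with survival time mentioned in the abstract can be sketched in a few lines; the health states, utility weights, and durations below are invented for illustration only.

```python
# Minimal sketch: quality-adjusted life years (QALYs) as the sum over
# health states of (utility of state) x (years spent in that state).
# States, utilities, and durations are hypothetical.
states = [
    ("progression-free", 0.80, 2.0),   # (state, utility, years in state)
    ("progressed",       0.55, 1.5),
]
qalys = sum(utility * years for _, utility, years in states)
print(qalys)  # 0.80*2.0 + 0.55*1.5 = 2.425
```

In a full health economic model, the durations would come from survival analysis and the utility weights from a preference elicitation method, often with discounting applied over time.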
990.
In survival data analysis, a significant amount of right censoring frequently occurs, indicating that there may be a proportion of individuals in the study for whom the event of interest will never happen. This fact is not accounted for by ordinary survival theory. Consequently, survival models with a cure fraction have been receiving a lot of attention in recent years. In this article, we consider the standard mixture cure rate model, where a fraction p0 of the population consists of cured or immune individuals and the remaining 1 − p0 are not cured. We assume an exponential distribution for the survival time and a uniform-exponential distribution for the censoring time. In a simulation study, the impact of the informative uniform-exponential censoring on the coverage probabilities and lengths of asymptotic confidence intervals is analyzed using the Fisher information and observed information matrices.
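A minimal simulation sketch of the mixture cure model described above: cured subjects never experience the event, the non-cured have exponential survival, and every cured subject is therefore censored. For simplicity this sketch uses plain exponential censoring rather than the paper's uniform-exponential scheme, and all parameter values are invented.

```python
# Hypothetical sketch of the standard mixture cure rate model:
# fraction p0 cured (event time = infinity), the rest exponential.
import numpy as np

rng = np.random.default_rng(42)
n, p0, lam, lam_c = 10_000, 0.3, 1.0, 0.5

cured = rng.random(n) < p0
t = rng.exponential(1.0 / lam, size=n)   # event times for the non-cured
t[cured] = np.inf                        # cured subjects never fail
c = rng.exponential(1.0 / lam_c, size=n) # simplified censoring times

observed = np.minimum(t, c)              # observed follow-up time
event = t <= c                           # event indicator (False = censored)
print(event.mean())                      # proportion of observed events
```

All cured subjects contribute censored observations, which is why heavy right censoring in real data motivates the cure-fraction formulation.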