2.
《Journal of Policy Modeling》2022,44(6):1251-1279
Despite a burgeoning literature on bank profitability, none of these studies gives due consideration to geographical proximity. I fill this gap by analyzing the effects of COVID-19 on the profitability of top-rated banks. Findings confirm the prevalence of spatial dependence at both the global and sub-global levels, with feedback effects being systematically higher than spillover effects. My study uncovers evidence of a COVID-19-induced decline in asset utilization. Findings advocate the sharing economy as a potential tool for banks in combating future pandemic risk, with a regionalized approach to supervision deemed better than its globalized counterpart.
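Spatial dependence of the kind this abstract tests for is often screened with Moran's I before a spatial model is fitted. The sketch below is illustrative only, not the paper's estimator: the chain-shaped weight matrix and the toy profitability vector are assumptions made up for the example.

```python
import numpy as np

def morans_i(x, W):
    """Moran's I: spatially weighted cross-product of deviations from the
    mean, normalized so strong positive dependence gives values near 1."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    W = np.asarray(W, dtype=float)
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# Illustrative: five banks on a chain, profitability rising along it,
# so neighbours have similar values and I should be positive.
W_chain = np.array([[0, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]])
profit = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
I = morans_i(profit, W_chain)
```

A value of I near zero would instead suggest no spatial clustering, in which case the feedback/spillover decomposition the abstract describes would have little to explain.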
3.
Abstract

This paper focuses on the inference of suitable, generally nonlinear, functions in stochastic volatility models. In this context, to estimate the variance of the proposed estimators, a moving block bootstrap (MBB) approach is suggested and discussed. Under mild assumptions, we show that the MBB procedure is weakly consistent. Moreover, a methodology for choosing the optimal block length in the MBB is proposed. Examples and simulations on the model illustrate the performance of the proposed procedure.
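The MBB idea itself can be sketched in a few lines: resample overlapping blocks of the series to preserve short-range dependence, recompute the statistic on each resampled series, and take the variance of the replicates. Everything below is an illustration, not the paper's setup; the AR(1) data, the fixed block length, and the sample mean as target statistic are all assumptions.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, stat=np.mean, seed=0):
    """Moving block bootstrap estimate of the sampling variance of `stat`:
    overlapping blocks of length `block_len` are drawn with replacement
    and concatenated to form each bootstrap series."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_starts = n - block_len + 1          # number of overlapping block starts
    k = int(np.ceil(n / block_len))       # blocks needed per bootstrap series
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n_starts, size=k)
        series = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        stats[b] = stat(series)
    return float(stats.var(ddof=1))

# Illustrative: variance of the sample mean of a dependent AR(1) series
rng = np.random.default_rng(1)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]
var_hat = moving_block_bootstrap(x, block_len=10, n_boot=500)
```

The block-length choice matters: too short and the dependence is broken, too long and too few distinct blocks remain, which is exactly the trade-off the paper's optimal-block-length methodology addresses.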
4.
Random effects regression mixture models are a way to classify longitudinal data (or trajectories) having possibly varying lengths. The mixture structure of the traditional random effects regression mixture model arises through the distribution of the random regression coefficients, which is assumed to be a mixture of multivariate normals. An extension of this standard model is presented that accounts for various levels of heterogeneity among the trajectories, depending on their assumed error structure. A standard likelihood ratio test is presented for testing this error structure assumption. Full details of an expectation-conditional maximization algorithm for maximum likelihood estimation are also presented. This model is used to analyze data from an infant habituation experiment, where it is desirable to assess whether infants comprise different populations in terms of their habituation time.
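A heavily simplified stand-in for this machinery is a two-component mixture of simple linear regressions fitted by plain EM (not the paper's ECM algorithm, and with a common noise variance rather than the random-effects structure described above). The simulated data and every name below are illustrative assumptions.

```python
import numpy as np

def em_mixture_regression(x, y, n_iter=200, seed=0):
    """EM for a two-component mixture of simple linear regressions with a
    common noise variance -- a simplified sketch of the mixture idea, not
    the random-effects ECM algorithm from the text."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    n = len(y)
    beta = rng.standard_normal((2, 2))      # (component, coefficient)
    pi = np.array([0.5, 0.5])
    sigma2 = float(np.var(y))
    for _ in range(n_iter):
        # E-step: posterior component probabilities (responsibilities)
        resid = y[:, None] - X @ beta.T                     # (n, 2)
        logp = -0.5 * resid**2 / sigma2 + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component (tiny ridge for safety)
        for k in range(2):
            w = r[:, k]
            beta[k] = np.linalg.solve((X.T * w) @ X + 1e-8 * np.eye(2),
                                      X.T @ (w * y))
        pi = r.mean(axis=0)
        resid = y[:, None] - X @ beta.T
        sigma2 = float((r * resid**2).sum() / n)
    return beta, pi, sigma2

# Illustrative data: two groups with opposite slopes
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
group = rng.integers(0, 2, 400)
y = np.where(group == 0, 2 * x, -2 * x) + 0.05 * rng.standard_normal(400)
beta, pi, sigma2 = em_mixture_regression(x, y)
```

In the real model the mixture is placed on the random regression coefficients themselves, and the conditional maximization steps of ECM replace the closed-form M-step shown here.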
5.
Abstract.  Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. With an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for the bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. The moment method, without resmoothing via a smaller bandwidth, is shown to produce a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
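A minimal version of the moment-type idea is kernel-smoothed event counts divided by the number of subjects still under observation. The Gaussian kernel, the hand-picked bandwidth, and the Poisson simulation below are all illustrative assumptions, and no resmoothing or bootstrap bandwidth selection is attempted.

```python
import numpy as np

def rate_estimate(event_times, censor_times, grid, bandwidth):
    """Moment-type kernel estimate of the occurrence rate function:
    kernel-smoothed pooled event times divided by the number of
    subjects still under observation at each grid point."""
    grid = np.asarray(grid, dtype=float)
    events = np.concatenate([np.asarray(e, dtype=float) for e in event_times])
    u = (grid[:, None] - events[None, :]) / bandwidth
    smoothed = np.exp(-0.5 * u**2).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    censor = np.asarray(censor_times, dtype=float)
    at_risk = (censor[None, :] >= grid[:, None]).sum(axis=1)
    return smoothed / np.maximum(at_risk, 1)

# Illustrative check: 50 subjects, events from a Poisson process with
# rate 2, all censored at t = 5; the estimate should hover around 2
# on the interior of the observation window.
rng = np.random.default_rng(0)
censor = np.full(50, 5.0)
events = [np.sort(rng.uniform(0, c, rng.poisson(2 * c))) for c in censor]
grid = np.linspace(1, 4, 31)
est = rate_estimate(events, censor, grid, bandwidth=0.5)
```

Because all subjects here share one censoring time, the nicks-at-censoring-times phenomenon the abstract describes does not show up; staggered censoring times would be needed to exhibit it.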
6.
Modelling daily multivariate pollutant data at multiple sites
Summary. This paper considers the spatiotemporal modelling of four pollutants measured daily at eight monitoring sites in London over a 4-year period. Such multiple-pollutant data sets measured over time at multiple sites within a region of interest are typical. Here, the modelling was carried out to provide the exposure for a study investigating the health effects of air pollution. Alternative objectives include the design problem of the positioning of a new monitoring site, or for regulatory purposes to determine whether environmental standards are being met. In general, analyses are hampered by missing data due, for example, to a particular pollutant not being measured at a site, a monitor being inactive by design (e.g. a 6-day monitoring schedule) or because of an unreliable or faulty monitor. Data of this type are modelled here within a dynamic linear modelling framework, in which the dependences across time, space and pollutants are exploited. Throughout, the approach is Bayesian, with implementation via Markov chain Monte Carlo sampling.
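The univariate local level model is the simplest member of the DLM class used here, and it shows concretely how a Kalman filter copes with the missing observations the abstract emphasizes: the time update still runs while the measurement update is skipped, so uncertainty grows until data return. The variances and the simulated gap below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_local_level(y, obs_var, state_var, m0=0.0, c0=1e6):
    """Kalman filter for the local level DLM  y_t = mu_t + v_t,
    mu_t = mu_{t-1} + w_t.  NaN observations (e.g. an inactive or
    faulty monitor) are skipped: the prior is carried forward and the
    filtered variance grows by `state_var` each missing day."""
    m, c = m0, c0
    means, variances = [], []
    for yt in y:
        c = c + state_var                  # time update
        if not np.isnan(yt):
            k = c / (c + obs_var)          # Kalman gain
            m = m + k * (yt - m)           # measurement update
            c = (1 - k) * c
        means.append(m)
        variances.append(c)
    return np.array(means), np.array(variances)

# Illustrative daily series with the monitor offline on days 10-14
rng = np.random.default_rng(2)
mu = np.cumsum(0.1 * rng.standard_normal(30))
y = mu + 0.5 * rng.standard_normal(30)
y[10:15] = np.nan
means, variances = kalman_local_level(y, obs_var=0.25, state_var=0.01)
```

The multivariate, multi-pollutant version in the paper follows the same pattern, with matrices in place of the scalars and MCMC handling the unknown variances.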
7.
Demonstrated equivalence between a categorical regression model based on case‐control data and an I‐sample semiparametric selection bias model leads to a new goodness‐of‐fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov‐type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I‐sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.
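The shape of such a statistic can be sketched as follows. The helper names are hypothetical, the two distribution functions are plain empirical CDFs rather than the paper's semiparametric estimates, and size-proportional weights stand in for whatever weighting the paper uses.

```python
import numpy as np

def ecdf_sup_distance(x, y):
    """sup_t |F_x(t) - F_y(t)| between two empirical distribution functions,
    evaluated on the pooled sample (where the sup must be attained)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return float(np.abs(Fx - Fy).max())

def weighted_ks_statistic(pairs):
    """Weighted average over response categories of the distance between
    two estimated distribution functions; weights proportional to the
    category sample sizes (an illustrative choice)."""
    dists = np.array([ecdf_sup_distance(x, y) for x, y in pairs])
    sizes = np.array([len(x) + len(y) for x, y in pairs])
    return float((sizes * dists).sum() / sizes.sum())

# Category 1: identical samples (distance 0); category 2: disjoint (distance 1)
stat = weighted_ks_statistic([([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]),
                              ([0.0, 1.0], [10.0, 11.0])])
```

In the actual test the null distribution of the statistic is approximated by the paper's bootstrap procedure rather than by any closed form.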
8.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often impose limitations on the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point source, area source, and line source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that utilizes the sampled concentration data for evaluating whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
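The bootstrap side of this workflow can be sketched in a few lines. The guideline value, the simulated measurements, and the use of a one-sided bootstrap bound as a resampling analogue of the paper's one-sided t-test are all illustrative assumptions; the OC-curve machinery is omitted.

```python
import numpy as np

def annual_mean_ci(samples, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap percentile confidence interval for the annual mean
    concentration from a limited set of daily measurements."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    boots = rng.choice(samples, size=(n_boot, len(samples))).mean(axis=1)
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return float(samples.mean()), float(lo), float(hi)

def complies(samples, guideline, n_boot=2000, alpha=0.05, seed=0):
    """Declare compliance when the upper one-sided bootstrap bound on the
    annual mean lies below the guideline concentration (a resampling
    analogue of the one-sided t-test described in the text)."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    boots = rng.choice(samples, size=(n_boot, len(samples))).mean(axis=1)
    return bool(np.quantile(boots, 1 - alpha) < guideline)

# Illustrative: 60 daily measurements around 10 ug/m3, guideline 15 ug/m3
rng = np.random.default_rng(3)
x = 10 + rng.standard_normal(60)
m, lo, hi = annual_mean_ci(x)
ok = complies(x, guideline=15.0)
```

Repeating the `complies` decision over simulated datasets at a range of true means is exactly how an OC curve for the sampling plan would be traced out.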
9.
The authors extend the block external bootstrap to partially linear regression models with strongly mixing, nonstationary error terms. In addition to providing an approximate distribution for the semiparametric least square estimator of the parametric component, they propose a consistent estimator of the co‐variance matrix of this estimator.
10.
The Urban Geographical Information Systems (GIS) Group within the Department of Civil Engineering at the University of Cape Town has been coordinating a pilot informal settlement upgrading project in Cape Town since 1998. The project objective has been the evolution of a model-based approach to informal settlement upgrading that is both structured and replicable. It was felt that the only way this could be achieved was through the use of a spatial data management system operated through a GIS. The spatial database has been used for all facets of data collection and data processing and forms the basis for all decision-making. Thus it covers all physical data pertaining to the site; cadastral and shack data; demographic and socio-economic data (with an in-depth review of every household); economic opportunities; and physical planning and design data. The result is a comprehensive, integrated, settlement upgrading methodology that is built upon a GIS-based spatial data management framework. Such a framework is seen as the basic building block for large-scale informal settlement upgrading.