2,776 results found (search time: 0 ms)
251.
Discusses general methods for processing CNC measurement data and the influence of multi-point measurement data from complex part surfaces on CNC machining. Studies a method of fitting the measurement data with circular-arc splines and generating cutter-location tool paths for machining.
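The abstract does not specify the fitting algorithm, so the sketch below illustrates the general idea with a Kåsa algebraic least-squares circle fit to noisy measured points; the sampled arc and noise level are hypothetical.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = a*x + b*y + c, then recover centre and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    cx, cy = coef[0] / 2, coef[1] / 2
    r = np.sqrt(coef[2] + cx**2 + cy**2)
    return cx, cy, r

# Hypothetical noisy measurement points sampled from a quarter arc of radius 10
rng = np.random.default_rng(0)
t = np.linspace(0, np.pi / 2, 25)
x = 10 * np.cos(t) + rng.normal(0, 0.05, t.size)
y = 10 * np.sin(t) + rng.normal(0, 0.05, t.size)
print(fit_circle(x, y))  # approximately (0, 0, 10)
```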
252.
A cohort analysis of female labor participation rates in the U.S. and Japan
Aggregate data on female labor participation rates in the U.S. and Japan, classified by period and by age, are decomposed into age, period, and cohort effects using innovative Bayesian cohort models that were developed to overcome the identification problem in cohort analysis. The main findings are that in both countries, age effects are the largest and period effects are the smallest; in both countries, age effects are roughly consistent with the life-cycle movements expected by labor economics, but the negative effects of marriage and/or childbearing on women's labor supply in Japan are much larger than those observed in the U.S.; and in both countries, upward movements of cohort effects during the 1930s–1960s were found. However, cohort effects are larger for the U.S. than for Japan. All the cohort results are roughly consistent with the marriage squeeze hypothesis and the Easterlin hypothesis.
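As a hedged illustration of the identification problem that the paper's Bayesian cohort models address: because cohort = period − age, the age/period/cohort design matrix is exactly rank deficient, and some prior is needed to pin down the effects. The ridge penalty below is a crude stand-in for the paper's priors, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
ages, periods = 5, 6
A_idx, P_idx = np.meshgrid(np.arange(ages), np.arange(periods), indexing="ij")
C_idx = P_idx - A_idx + (ages - 1)            # cohort index = period - age
y = (0.3 * A_idx - 0.1 * P_idx + 0.05 * C_idx
     + rng.normal(0, 0.02, A_idx.shape)).ravel()

def dummies(idx, n):
    D = np.zeros((idx.size, n))
    D[np.arange(idx.size), idx.ravel()] = 1.0
    return D

X = np.hstack([dummies(A_idx, ages), dummies(P_idx, periods),
               dummies(C_idx, ages + periods - 1)])
# Exact linear dependence among the three effect blocks: the APC problem
print("rank", np.linalg.matrix_rank(X), "of", X.shape[1], "columns")

# Ridge penalty as a crude stand-in for a Bayesian prior on the effects
beta = np.linalg.solve(X.T @ X + 1.0 * np.eye(X.shape[1]), X.T @ y)
print("centred age effects:", np.round(beta[:ages] - beta[:ages].mean(), 3))
```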
253.
Bayesian Measures of the Minimum Detectable Concentration of an Immunoassay
The minimum detectable concentration (MDC) is one of the most important properties of an assay. It is a statement about the smallest physical quantity an assay can reliably measure, and is used in assay design and quality control assessments. A plethora of measures of the MDC have been reported in a widely scattered literature. Many of these were developed at a time when accuracy and relevance had to be sacrificed for computational feasibility. This paper identifies limitations of existing measures and demonstrates how Bayesian inference may be used to overcome these limitations. Several new measures of the MDC are developed. These are conceptually simpler than existing measures, and are free of analytical approximations. The recent advances in Bayesian computation make them efficient to evaluate. A procedure developed in this paper measures the difference in the quality of two assays and shows that the new Bayesian measures perform better than existing measures.
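The paper's measures are not reproduced here; the sketch below shows one simple Bayesian notion of an MDC under an assumed linear calibration with known noise: the smallest concentration whose posterior predictive reading exceeds a blank's 95% bound with high probability. Calibration data, noise level, and the detection criterion are all assumptions.

```python
import numpy as np

# Hypothetical linear calibration: response = a + b * conc + noise
rng = np.random.default_rng(2)
conc = np.repeat([0.0, 1.0, 2.0, 5.0, 10.0], 4)
resp = 0.5 + 0.8 * conc + rng.normal(0, 0.3, conc.size)

# Conjugate posterior for (a, b) with known noise sd and a flat prior
sigma = 0.3
X = np.column_stack([np.ones_like(conc), conc])
V = np.linalg.inv(X.T @ X) * sigma**2          # posterior covariance
m = V @ X.T @ resp / sigma**2                  # posterior mean (= OLS here)
draws = rng.multivariate_normal(m, V, 5000)

def p_detect(x):
    """Posterior probability that a new reading at concentration x exceeds
    the upper 95% posterior predictive bound of a blank (conc = 0)."""
    blank = draws[:, 0] + rng.normal(0, sigma, 5000)
    crit = np.quantile(blank, 0.95)
    reading = draws[:, 0] + draws[:, 1] * x + rng.normal(0, sigma, 5000)
    return (reading > crit).mean()

# One candidate MDC: smallest conc with detection probability >= 0.95
grid = np.linspace(0.0, 3.0, 61)
mdc = next(x for x in grid if p_detect(x) >= 0.95)
print(f"approximate Bayesian MDC: {mdc:.2f}")
```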
254.
This work is motivated by data on daily travel-to-work flows observed between pairs of elemental territorial units of an Italian region. The data were collected during the 1991 population census. The aim of the analysis is to partition the region into local labour markets. We present a new method for this which is inspired by the Bayesian texture segmentation approach. We introduce a novel Markov random-field model for the distribution of the variables that label the local labour markets for each territorial unit. Inference is performed by means of Markov chain Monte Carlo methods. The issue of model hyperparameter estimation is also addressed. We compare the results with those obtained by applying a classical method. The methodology can be applied with minor modifications to other data sets.
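A heavily simplified, hypothetical stand-in for the approach: a Potts-style Markov random-field prior over labels with a Gaussian likelihood, sampled by single-site Gibbs updates on a chain of units. The paper works on irregular territorial lattices with commuting-flow data and also estimates hyperparameters; none of that is attempted here.

```python
import numpy as np

# Toy version: assign 60 units on a line to K labour-market labels using a
# Potts prior (neighbours prefer equal labels) and a Gaussian likelihood on
# an observed summary statistic, with known component means for brevity.
rng = np.random.default_rng(3)
K, n, beta, sigma = 3, 60, 1.2, 0.5
true = np.repeat([0, 1, 2], n // 3)
obs = true.astype(float) + rng.normal(0, sigma, n)  # stand-in for flow data

labels = rng.integers(0, K, n)
means = np.arange(K, dtype=float)
for sweep in range(200):
    for i in range(n):
        logp = -0.5 * (obs[i] - means) ** 2 / sigma**2
        for j in (i - 1, i + 1):                    # chain neighbours
            if 0 <= j < n:
                logp += beta * (np.arange(K) == labels[j])
        p = np.exp(logp - logp.max())
        labels[i] = rng.choice(K, p=p / p.sum())
print("label agreement:", (labels == true).mean())
```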
255.
A class of log-linear models, referred to as labelled graphical models (LGMs), is introduced for multinomial distributions. These models generalize graphical models (GMs) by employing partial conditional independence restrictions which are valid only in subsets of an outcome space. Theoretical results concerning model identifiability, decomposability and estimation are derived. A decision theoretical framework and a search algorithm for the identification of plausible models are described. Real data sets are used to illustrate that LGMs may provide a simpler interpretation of a dependence structure than GMs.
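A toy numerical illustration of the kind of restriction LGMs encode: X and Y independent only in part of the outcome space (given Z = 0), a structure an ordinary graphical model cannot express. The probability tables are made up.

```python
import numpy as np

# X independent of Y given Z = 0 (product table) but not given Z = 1:
# a partial conditional independence of the kind an LGM can encode.
p_z0 = np.outer([0.5, 0.5], [0.5, 0.5])
p_z1 = np.array([[0.40, 0.10],
                 [0.10, 0.40]])
for name, t in (("Z=0", p_z0), ("Z=1", p_z1)):
    product = np.outer(t.sum(axis=1), t.sum(axis=0))
    print(name, "X independent of Y:", np.allclose(t, product))
```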
256.
This paper proposes a high dimensional factor multivariate stochastic volatility (MSV) model in which factor covariance matrices are driven by Wishart random processes. The framework allows for unrestricted specification of intertemporal sensitivities, which can capture the persistence in volatilities, kurtosis in returns, and correlation breakdowns and contagion effects in volatilities. The factor structure allows addressing high dimensional setups used in portfolio analysis and risk management, as well as modeling conditional means and conditional variances within the model framework. Owing to the complexity of the model, we perform inference using Markov chain Monte Carlo simulation from the posterior distribution. A simulation study is carried out to demonstrate the efficiency of the estimation algorithm. We illustrate our model on a data set that includes 88 individual equity returns and the two Fama-French size and value factors. With this application, we demonstrate the ability of the model to address high dimensional applications suitable for asset allocation, risk management, and asset pricing.
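The sketch below only simulates a toy Wishart-driven covariance process for two factors, to show how such a process produces persistent, stochastic covariance matrices; the paper's actual specification and its MCMC estimation are much richer, and all parameter values here are assumptions.

```python
import numpy as np

# Toy Wishart autoregressive process for a 2x2 factor covariance Sigma_t,
# with factor returns r_t ~ N(0, Sigma_t). Illustrative only.
rng = np.random.default_rng(4)
k, T, nu, rho = 2, 500, 10, 0.95
Sigma = np.eye(k)
rets = np.empty((T, k))
for t in range(T):
    # Scale mixes persistence (rho * previous Sigma) with a long-run target,
    # so E[Sigma_t | Sigma_{t-1}] = rho * Sigma_{t-1} + (1 - rho) * I
    S = (rho * Sigma + (1 - rho) * np.eye(k)) / nu
    W = rng.multivariate_normal(np.zeros(k), S, nu)
    Sigma = W.T @ W                               # a Wishart(nu, S) draw
    rets[t] = rng.multivariate_normal(np.zeros(k), Sigma)
print(rets.std(axis=0), np.corrcoef(rets.T)[0, 1])
```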
257.
The utilization of DNA evidence in cases of forensic identification has become widespread over the last few years. The strength of this evidence against an individual standing trial is typically presented in court in the form of a likelihood ratio (LR) or its reciprocal (the profile match probability). The value of this LR will vary according to the nature of the genetic relationship between the accused and other possible perpetrators of the crime in the population. This paper develops ideas and methods for analysing data and evaluating LRs when the evidence is based on short tandem repeat profiles, with special emphasis placed on a Bayesian approach. These are then applied in the context of a particular quadruplex profiling system used for routine case-work by the UK Forensic Science Service.  相似文献
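A minimal sketch of the LR calculation for a matching heterozygous STR profile using the Balding–Nichols coancestry correction, a standard formula in this literature; whether it matches the paper's exact Bayesian treatment is not established here, and the allele frequencies and θ value are made up.

```python
import numpy as np

def match_lr(p, q, theta=0.02):
    """LR for a heterozygous match at one STR locus under the
    Balding-Nichols coancestry correction (allele frequencies p, q)."""
    match_prob = (2 * (theta + (1 - theta) * p) * (theta + (1 - theta) * q)
                  / ((1 + theta) * (1 + 2 * theta)))
    return 1.0 / match_prob

# Four independent loci (a toy quadruplex): locus LRs multiply
freqs = [(0.10, 0.08), (0.12, 0.05), (0.20, 0.15), (0.07, 0.11)]
lr = np.prod([match_lr(p, q) for p, q in freqs])
print(f"combined LR ~ {lr:.3e}, match probability ~ {1 / lr:.3e}")
```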
258.
Structural regression attempts to reveal an underlying relationship by compensating for errors in the variables. Ordinary least-squares regression has an entirely different purpose and provides a relationship between error-included variables. Structural model solutions, also known as the errors-in-variables and measurement-error solutions, use various inputs such as the error-variance ratio and x-error variance. This paper proposes that more accurate structural line gradient (coefficient) solutions will result from using the several solutions together as a system of equations. The known data scatter, as measured by the correlation coefficient, should always be used in choosing legitimate combinations of x- and y-error terms. However, this is difficult using equations. Chart solutions are presented to assist users to understand the structural regression process, to observe the correlation coefficient constraint, to assess the impact of their error estimates and, therefore, to provide better quality estimates of the structural regression gradient.
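A small sketch of a structural (errors-in-variables) gradient given a known error-variance ratio, often called Deming regression, contrasted with the attenuated OLS slope. The paper's system-of-equations and chart solutions are not reproduced, and the data are synthetic.

```python
import numpy as np

def deming_slope(x, y, lam=1.0):
    """Structural gradient given the error-variance ratio
    lam = var(y errors) / var(x errors)."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - lam * sxx
    return (d + np.sqrt(d**2 + 4 * lam * sxy**2)) / (2 * sxy)

rng = np.random.default_rng(5)
xi = rng.uniform(0, 10, 200)                # true x, never observed
x = xi + rng.normal(0, 1.0, 200)            # x observed with error
y = 2.0 * xi + rng.normal(0, 1.0, 200)      # y observed with error
ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
print(f"OLS (attenuated): {ols:.2f}, structural: {deming_slope(x, y):.2f}")
```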
259.
Bayesian hierarchical models typically involve specifying prior distributions for one or more variance components. This is rather removed from the observed data, so specification based on expert knowledge can be difficult. While there are suggestions for “default” priors in the literature, often a conditionally conjugate inverse-gamma specification is used, despite documented drawbacks of this choice. The authors suggest “conservative” prior distributions for variance components, which deliberately give more weight to smaller values. These are appropriate for investigators who are skeptical about the presence of variability in the second-stage parameters (random effects) and want to particularly guard against inferring more structure than is really present. The suggested priors readily adapt to various hierarchical modelling settings, such as fitting smooth curves, modelling spatial variation and combining data from multiple sites.
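To illustrate why the conventional inverse-gamma choice is not "conservative" in the authors' sense, the snippet compares prior mass on small values of a random-effects variance against an exponential prior, used here purely as a stand-in for a prior that favours small variance components (it is not the authors' proposal).

```python
from scipy import stats

# Prior mass on "negligible variability" (tau^2 < 0.1) under the common
# inverse-gamma(0.001, 0.001) prior vs. an exponential(1) stand-in.
cut = 0.1
ig = stats.invgamma(0.001, scale=0.001)
expo = stats.expon(scale=1.0)
print(f"P(tau^2 < {cut}) inverse-gamma: {ig.cdf(cut):.4f}")
print(f"P(tau^2 < {cut}) exponential : {expo.cdf(cut):.4f}")
# The exponential puts far more weight on small variances, guarding
# against inferring more second-stage structure than is present.
```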
260.
The author proposes to use weighted likelihood to approximate Bayesian inference when no external or prior information is available. He proposes a weighted likelihood estimator that minimizes the empirical Bayes risk under relative entropy loss. He discusses connections among the weighted likelihood, empirical Bayes and James-Stein estimators. Both simulated and real data sets are used for illustration purposes.
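For context on the James-Stein connection, a minimal sketch of positive-part James-Stein shrinkage of several normal means, with synthetic data; this is the classical estimator the abstract refers to, not the author's weighted-likelihood estimator.

```python
import numpy as np

# Positive-part James-Stein shrinkage of p normal means toward zero
rng = np.random.default_rng(6)
p, sigma = 10, 1.0
theta = rng.normal(0, 0.5, p)                # true means
x = theta + rng.normal(0, sigma, p)          # one observation per mean
shrink = max(0.0, 1 - (p - 2) * sigma**2 / np.sum(x**2))
js = shrink * x
print(f"MLE risk: {np.mean((x - theta) ** 2):.3f}")   # typically larger
print(f"JS  risk: {np.mean((js - theta) ** 2):.3f}")  # typically smaller
```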