1.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often limit the number of ambient concentration measurements available for environmental risk analysis. Therefore, sample size limitations, representativeness of the data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point-source-, area-source-, and line-source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that uses the sampled concentration data to evaluate whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
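As an illustration of the bootstrap and normal-theory interval estimates described above, here is a minimal Python sketch. The data are simulated (lognormal, since ambient concentrations are typically right-skewed), and the guideline value of 15 ug/m^3 used in the one-sided t-test is a hypothetical limit, not one taken from the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical limited sample: one 24-hour SO2 average every sixth day (n = 61),
    # drawn from a lognormal distribution to mimic right-skewed concentration data
    sample = rng.lognormal(mean=2.0, sigma=0.6, size=61)

    # Bootstrap the annual mean: resample with replacement and recompute the mean
    boot_means = np.array([rng.choice(sample, sample.size, replace=True).mean()
                           for _ in range(5000)])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])   # percentile 95% bounds

    # Normal-theory interval for comparison
    mean = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(sample.size)
    print(f"bootstrap 95% CI:     ({lo:.1f}, {hi:.1f})")
    print(f"normal-theory 95% CI: ({mean - 1.96*se:.1f}, {mean + 1.96*se:.1f})")

    # One-sided t-test for compliance with the hypothetical guideline of 15 ug/m^3:
    # H0: true mean >= 15 vs H1: true mean < 15 (site is in compliance)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=15.0, alternative="less")
    print(f"one-sided t-test: t = {t_stat:.2f}, p = {p_value:.3f}")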
2.
Principal curves revisited
A principal curve (Hastie and Stuetzle, 1989) is a smooth curve passing through the middle of a distribution or data cloud, and is a generalization of linear principal components. We give an alternative definition of a principal curve, based on a mixture model. Estimation is carried out through an EM algorithm. Some comparisons are made to the Hastie-Stuetzle definition.
3.
Data depths, which have been used to detect outliers or to extract a representative subset of the data, can also be applied to classification. We propose a resampling-based classification method, motivated by the fact that resampling techniques yield a consistent estimator of the distribution of a statistic. The performance of the method was evaluated on eight contaminated models in terms of correct classification rates (CCRs) and compared with other known methods. The proposed method consistently showed higher average CCRs, up to 4% higher than the other methods. In addition, the method was applied to the Berkeley data, where the average CCRs were between 0.79 and 0.85.
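The abstract does not spell out the paper's resampling-based depth classifier, so the following Python sketch shows only the simpler maximum-depth idea it builds on, using Mahalanobis depth; the two-Gaussian training data and all names are illustrative assumptions.

    import numpy as np

    def mahalanobis_depth(x, X):
        # Depth of point x w.r.t. sample X: 1 / (1 + squared Mahalanobis distance)
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d2 = (x - mu) @ cov_inv @ (x - mu)
        return 1.0 / (1.0 + d2)

    def max_depth_classify(x, training_samples):
        # Assign x to the class whose training sample gives it the greatest depth
        depths = {label: mahalanobis_depth(x, X)
                  for label, X in training_samples.items()}
        return max(depths, key=depths.get)

    # Toy usage: two bivariate Gaussian classes
    rng = np.random.default_rng(0)
    train = {0: rng.normal(0.0, 1.0, size=(100, 2)),
             1: rng.normal(2.0, 1.0, size=(100, 2))}
    print(max_depth_classify(np.array([1.8, 2.1]), train))   # expected: 1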
4.
Placing the rise of the left-wing literary current of the 1930s within the political-economic research framework of "Republican China" as a socio-historical period reveals its very close relationship with the economic environment of commercial publishing in the Republican era. Starting from this perspective, drawing on relevant historical materials, and taking the transformation of the Liangyou Book Printing Company as a case study, this paper explores the generative mechanism of the left-wing literary current, which, through extensive interaction with the publishing industry, once swept the literary scene and grew into a "main current" of great momentum.
5.
The receiver operating characteristic (ROC) curve is one of the most commonly used methods for comparing the diagnostic performance of two or more laboratory or diagnostic tests. In this paper, we propose semi-empirical likelihood based confidence intervals for the ROC curves of two populations, where one population is parametric and the other is non-parametric, and both have missing data. After imputing the missing values, we derive the semi-empirical likelihood ratio statistic and the corresponding likelihood equations. It is shown that the log semi-empirical likelihood ratio statistic is asymptotically scaled chi-squared. The estimating equations are solved simultaneously to obtain the estimated lower and upper bounds of the semi-empirical likelihood confidence intervals. We conduct extensive simulation studies to evaluate the finite-sample performance of the proposed confidence intervals under various sample sizes and missing probabilities.
6.
The paper proposes a new test for detecting an umbrella pattern under a general non-parametric scheme. The alternative asserts that the umbrella ordering holds, while the null hypothesis is its complement. The main focus is on controlling the power function of the test outside the alternative. As a result, the asymptotic type I error of the constructed solution is smaller than or equal to the fixed significance level α on the whole set where the umbrella ordering does not hold, and under finite sample sizes this error is controlled to a satisfactory extent. A simulation study shows, among other things, that the new test improves upon the solution widely recommended in the literature. A routine, written in R, is attached as the Supporting Information file.
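The abstract does not name the widely recommended benchmark, but for umbrella alternatives that is commonly the Mack-Wolfe test. As a hedged illustration (not the paper's new test), here is a Python sketch of a permutation version of the known-peak Mack-Wolfe statistic, assuming continuous data with no ties.

    import numpy as np
    from itertools import combinations

    def u_count(x, y):
        # Mann-Whitney count: number of pairs with x[a] < y[b] (assumes no ties)
        return sum((xi < y).sum() for xi in x)

    def mack_wolfe_stat(groups, peak):
        # Known-peak Mack-Wolfe statistic: evidence of an increase up to the peak
        # and a decrease after it; pairs straddling the peak are not used.
        stat = 0
        for i, j in combinations(range(len(groups)), 2):
            if j <= peak:
                stat += u_count(groups[i], groups[j])
            elif i >= peak:
                stat += u_count(groups[j], groups[i])
        return stat

    def permutation_pvalue(groups, peak, n_perm=2000, seed=0):
        rng = np.random.default_rng(seed)
        pooled = np.concatenate(groups)
        cuts = np.cumsum([len(g) for g in groups])[:-1]
        observed = mack_wolfe_stat(groups, peak)
        hits = sum(mack_wolfe_stat(np.split(rng.permutation(pooled), cuts), peak)
                   >= observed for _ in range(n_perm))
        return hits / n_perm

    # Toy usage: four groups with an umbrella peak at the third group (index 2)
    rng = np.random.default_rng(1)
    data = [rng.normal(m, 1.0, 15) for m in (0.0, 0.8, 1.6, 0.5)]
    print(permutation_pvalue(data, peak=2))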
7.
The nonparametric two-sample bootstrap is applied to computing the uncertainties of measures in receiver operating characteristic (ROC) analysis on large datasets, in areas such as biometrics and speaker recognition, when the analytical method cannot be used. Its validity was studied by computing the standard error of the area under the ROC curve using the well-established analytical Mann-Whitney statistic method and also using the bootstrap. The analytical result is unique, whereas the bootstrap results form a probability distribution owing to their stochastic nature. The comparisons were carried out using relative errors and hypothesis testing, and the two approaches match very well. This validation provides a sound foundation for such computations.
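A small Python sketch of the comparison described above: the Mann-Whitney (AUC) estimate with the Hanley-McNeil analytical standard error versus the nonparametric two-sample bootstrap, on simulated scores. The normal score distributions and sample sizes are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    pos = rng.normal(1.0, 1.0, 500)   # scores of genuine/positive cases
    neg = rng.normal(0.0, 1.0, 500)   # scores of impostor/negative cases

    def auc_mann_whitney(pos, neg):
        # AUC = P(pos > neg) + 0.5 * P(pos == neg), the Mann-Whitney estimator
        diff = pos[:, None] - neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    auc = auc_mann_whitney(pos, neg)

    # Hanley-McNeil analytical standard error
    m, n = len(pos), len(neg)
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    se_analytic = np.sqrt((auc*(1 - auc) + (m - 1)*(q1 - auc**2)
                           + (n - 1)*(q2 - auc**2)) / (m * n))

    # Two-sample bootstrap: resample positives and negatives independently
    boot = []
    for _ in range(1000):
        bp = rng.choice(pos, m, replace=True)
        bn = rng.choice(neg, n, replace=True)
        boot.append(auc_mann_whitney(bp, bn))
    se_boot = np.std(boot, ddof=1)

    print(f"AUC={auc:.4f}  SE(analytic)={se_analytic:.4f}  SE(bootstrap)={se_boot:.4f}")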
8.
Many articles that have estimated models with forward-looking expectations have reported that the magnitude of the coefficient on the expectations term is very large compared with the effects coming from past dynamics. This has sometimes been regarded as implausible and has led to the feeling that the expectations coefficient is biased upwards. A relatively general argument that has been advanced is that the bias could be due to structural changes in the means of the variables entering the structural equation. An alternative explanation is that the bias comes from weak instruments. In this article, we investigate the issue of upward bias in the estimated coefficient of the expectations variable based on a model in which we can see what causes the breaks and how to control for them. We conclude that weak instruments are the most likely cause of any bias and note that structural change can affect the quality of instruments. We also examine the empirical work of Castle et al. (2014, "Misspecification testing: non-invariance of expectations models of inflation," Econometric Reviews 33(5-6):553-574, doi:10.1080/07474938.2013.825137) on the new Keynesian Phillips curve (NKPC) in the Euro Area and the U.S., assessing whether the smaller coefficient on expectations that Castle et al. highlight is due to structural change. Our conclusion is that it is not; instead, it comes from their addition of variables to the NKPC. After allowing for the fact that there are weak instruments in the estimated re-specified model, it would seem that the forward coefficient estimate is actually quite high rather than low.
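To make the weak-instruments argument concrete, here is a stylized Python simulation (not the paper's model): a just-identified IV estimator applied to an endogenous regressor, with the instrument's first-stage coefficient pi shrunk toward zero. As pi weakens, the median IV estimate drifts away from the true coefficient toward the inconsistent OLS limit.

    import numpy as np

    rng = np.random.default_rng(0)
    beta, n = 0.5, 200

    for pi in (1.0, 0.1, 0.02):          # first-stage strength: strong -> very weak
        estimates = []
        for _ in range(2000):
            z = rng.normal(size=n)                     # instrument
            v = rng.normal(size=n)                     # first-stage error
            u = 0.8 * v + 0.6 * rng.normal(size=n)     # structural error, corr(u, v) > 0
            x = pi * z + v                             # endogenous regressor
            y = beta * x + u
            estimates.append((z @ y) / (z @ x))        # just-identified IV estimator
        print(f"pi = {pi:5.2f}: median IV estimate = {np.median(estimates):.3f} "
              f"(true beta = {beta})")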
9.
Traditional approaches to forecasting large collections of time-series curves require building a separate model for each curve, which makes the modeling workload enormous and is impractical in real applications. This paper proposes a new method to address the problem: curve classification modeling. The method first reduces the number of distinct model types, then classifies the curves and builds one model per class, greatly reducing the modeling workload while preserving as much of the original information as possible. The paper explains the principle and computational procedure of the method and demonstrates its practicality and effectiveness with a case study forecasting GDP curves for multiple regions.
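The abstract does not give the algorithmic details of curve classification modeling, so the following Python sketch is one plausible instantiation under stated assumptions: k-means classifies simulated curves into a few shape classes, and a simple quadratic trend model is then fitted per class instead of per curve.

    import numpy as np
    from sklearn.cluster import KMeans

    # Simulated "large-scale" set of curves: 300 series, 20 time points each
    rng = np.random.default_rng(2)
    t = np.arange(20)
    shapes = [0.02 * t**2, 1.0 + 0.5 * t, 5.0 * np.log1p(t)]   # latent growth patterns
    curves = np.vstack([s + rng.normal(0, 0.3, t.size)
                        for s in shapes for _ in range(100)])

    # Step 1: classify the curves into a small number of shape classes
    k = 3
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(curves)

    # Step 2: fit one forecasting model per class (here, a quadratic trend
    # on the class mean curve) instead of one model per curve
    forecasts = {}
    for c in range(k):
        mean_curve = curves[labels == c].mean(axis=0)
        coef = np.polyfit(t, mean_curve, deg=2)
        forecasts[c] = np.polyval(coef, np.arange(20, 25))   # 5-step-ahead forecast

    print({c: np.round(f, 2) for c, f in forecasts.items()})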
10.
Graphical representation of survival curves is often used to illustrate associations between exposures and time-to-event outcomes. However, when exposures are time-dependent, calculating survival probabilities is not straightforward. Our aim was to develop a method to estimate time-dependent survival probabilities and represent them graphically. Cox models with time-dependent indicators representing state changes were fitted, and survival probabilities were plotted using pre-specified times of state change. Time-varying hazard ratios for the state change were also explored. The method was applied to data from the Adult-to-Adult Living Donor Liver Transplantation Cohort Study (A2ALL). Survival curves showing a 'split' at a pre-specified time t allow a qualitative comparison of survival probabilities between patients with similar baseline covariates who do and do not experience a state change at time t. Interactions with time since the state change can be represented visually to reflect hazard ratios that change over time. The A2ALL results showed differences in survival probabilities among patients who did not receive a transplant, received a living donor transplant, and received a deceased donor transplant. These graphical representations of survival curves with time-dependent indicators improve upon previous methods and allow a clinically meaningful interpretation.
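A minimal sketch of fitting a Cox model with a time-dependent state-change indicator in start-stop (counting-process) format, assuming the Python lifelines package; the toy data, column names, and transplant indicator are illustrative, not the A2ALL data.

    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # Long (start-stop) format: one row per patient per period; "transplant"
    # switches from 0 to 1 at the pre-specified state-change time.
    # Toy data only; real analyses need many more subjects.
    df = pd.DataFrame({
        "id":         [1, 1, 2, 3, 3, 4, 5, 5, 6],
        "start":      [0, 6, 0, 0, 3, 0, 0, 9, 0],
        "stop":       [6, 24, 18, 3, 12, 20, 9, 30, 15],
        "transplant": [0, 1, 0, 0, 1, 0, 0, 1, 0],
        "event":      [0, 1, 1, 0, 1, 1, 0, 0, 1],   # 1 = death in this interval
    })

    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
    ctv.print_summary()   # hazard ratio for the time-dependent transplant indicator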