1.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often limit the number of ambient concentration measurements available for performing environmental risk analysis. Therefore, sample size limitations, representativeness of the data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point-source-, area-source-, and line-source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine the optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that uses the sampled concentration data to evaluate whether a sampling site is in compliance with the relevant ambient guideline concentrations for toxic air contaminants.
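As a rough illustration of the interval-estimation step described above, the following Python sketch computes both a normal-theory and a bootstrap 95% confidence interval for an annual mean from a limited sample, and runs a one-sided t-test against a guideline concentration. The measurement values and the guideline level are hypothetical; this is not the paper's data or code.

```python
# A minimal sketch, assuming hypothetical SO2 measurements (ppb).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=2.0, sigma=0.6, size=24)  # 24 measurements over a year

# Normal-theory 95% interval: mean +/- t * s / sqrt(n).
n = samples.size
mean = samples.mean()
se = samples.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
normal_ci = (mean - t_crit * se, mean + t_crit * se)

# Bootstrap percentile 95% interval: resample with replacement many times.
boot_means = np.array([rng.choice(samples, size=n, replace=True).mean()
                       for _ in range(5000)])
boot_ci = tuple(np.percentile(boot_means, [2.5, 97.5]))

# One-sided t-test for compliance with a guideline concentration
# (the guideline value here is an assumption made for the example).
guideline = 12.0
t_stat, p_value = stats.ttest_1samp(samples, guideline, alternative='less')

print(f"mean = {mean:.2f}, normal CI = {normal_ci}, bootstrap CI = {boot_ci}")
print(f"one-sided test of mean < guideline: p = {p_value:.3f}")
```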
2.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, where the spectrum is discretized on an arbitrarily thin frequency grid. Since the focus is on line spectra estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli–Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially in the case of strong sampling aliases perturbing the Fourier spectrum. Both methods deserve intensive testing on real data sets by physicists.
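To make the Laplace-prior route concrete, here is a hedged Python sketch on synthetic irregular samples: the MAP estimate under a Laplace amplitude prior is an L1-penalized least-squares problem over a fine frequency grid, which a LASSO solver can handle. The grid, penalty level, and signal are assumptions for illustration, not the authors' algorithm.

```python
# Sparse line-spectrum estimation from irregular samples as an L1 problem
# (illustrative sketch only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Irregular sampling times and a two-sinusoid signal (hypothetical example).
t = np.sort(rng.uniform(0.0, 100.0, size=120))
y = 1.0 * np.sin(2 * np.pi * 0.11 * t) + 0.5 * np.sin(2 * np.pi * 0.27 * t)
y += 0.1 * rng.standard_normal(t.size)

# Arbitrarily thin frequency grid; each column is a cosine or sine atom.
freqs = np.linspace(0.01, 0.5, 500)
design = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
                    np.sin(2 * np.pi * np.outer(t, freqs))])

# L1-penalized least squares = MAP estimate under a Laplace amplitude prior.
model = Lasso(alpha=0.01, max_iter=50_000, fit_intercept=False)
model.fit(design, y)

amps = np.hypot(model.coef_[:freqs.size], model.coef_[freqs.size:])
print("detected lines near:", freqs[amps > 0.1])
```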
3.
Principal curves revisited
A principal curve (Hastie and Stuetzle, 1989) is a smooth curve passing through the middle of a distribution or data cloud, and is a generalization of linear principal components. We give an alternative definition of a principal curve, based on a mixture model. Estimation is carried out through an EM algorithm. Some comparisons are made to the Hastie-Stuetzle definition.
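The following Python sketch gives a crude, hedged illustration of the mixture-model view on synthetic data: it fits a Gaussian mixture to a noisy arc and orders the component means into a polyline through the data cloud. The paper's EM estimator additionally alternates soft assignments with curve smoothing, which this sketch omits.

```python
# Crude illustration of the mixture-model view of principal curves
# (not the paper's estimator).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Hypothetical data cloud concentrated around a smooth arc.
s = rng.uniform(0, np.pi, size=400)
data = np.column_stack([np.cos(s), np.sin(s)]) + 0.08 * rng.standard_normal((400, 2))

# Fit a Gaussian mixture; component means act as nodes of the fitted curve.
gmm = GaussianMixture(n_components=12, covariance_type='spherical',
                      random_state=0).fit(data)

# Order the nodes along the leading principal component to form a polyline.
order = np.argsort(PCA(n_components=1).fit_transform(gmm.means_).ravel())
curve_nodes = gmm.means_[order]
print(curve_nodes)
```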
4.
Wavelet thresholding of spectra has to be handled with care when the spectra are the predictors of a regression problem. Indeed, blind thresholding of the signal followed by a regression method often leads to deteriorated predictions. The aim of this article is to show that sparse regression methods, applied in the wavelet domain, perform an automatic thresholding: the most relevant wavelet coefficients are selected to optimize the prediction of a given target of interest. This approach can be seen as a joint thresholding designed for a predictive purpose. The method is illustrated on a real-world problem where metabolomic data are linked to poison ingestion. This example demonstrates the usefulness of wavelet expansion and the good behavior of sparse and regularized methods. A comparison study is performed between the two-step approach (wavelet thresholding followed by regression) and the one-step approach (selection of wavelet coefficients with a sparse regression). The comparison includes two types of wavelet bases, various thresholding methods, and various regression methods, and is evaluated by calculating prediction performances. Information about the location of the most important features on the spectra was also obtained and used to identify the most relevant metabolites involved in the mice poisoning.
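A hedged Python sketch of the one-step idea follows, using synthetic spectra and assumed settings (db4 wavelet, four decomposition levels, cross-validated LASSO): the sparse regression in the wavelet domain zeroes out coefficients that do not help predict the target, which is the automatic, prediction-driven thresholding described above.

```python
# One-step approach: wavelet expansion followed by sparse regression
# (illustrative sketch, not the authors' pipeline).
import numpy as np
import pywt
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)

# Hypothetical spectra: 80 samples x 128 channels, with one informative peak.
n, p = 80, 128
spectra = rng.standard_normal((n, p)).cumsum(axis=1) * 0.05
peak = np.exp(-0.5 * ((np.arange(p) - 40) / 3.0) ** 2)
signal_strength = rng.standard_normal(n)
spectra += np.outer(signal_strength, peak)
target = 2.0 * signal_strength + 0.1 * rng.standard_normal(n)

# Wavelet expansion of each spectrum, flattened into a coefficient vector.
def wavelet_coeffs(x):
    return np.concatenate(pywt.wavedec(x, 'db4', level=4))

features = np.array([wavelet_coeffs(row) for row in spectra])

# Sparse regression in the wavelet domain: coefficients given zero weight are,
# in effect, thresholded away for the predictive purpose at hand.
model = LassoCV(cv=5).fit(features, target)
print("selected wavelet coefficients:", np.flatnonzero(model.coef_).size)
```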
5.
The asymptotic variance of the maximum likelihood estimate is proved to decrease when the maximization is restricted to a subspace that contains the true parameter value. Maximum likelihood estimation allows a systematic fitting of covariance models to the sample, which is important in data assimilation. The hierarchical maximum likelihood approach is applied to the spectral diagonal covariance model with different parameterizations of eigenvalue decay, and to the sparse inverse covariance model with specified parameter values on different sets of nonzero entries. It is shown computationally that using smaller sets of parameters can substantially decrease the sampling noise in high dimensions.
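As an illustration of fitting a spectral diagonal covariance model with a small number of parameters, the sketch below maximizes the Gaussian likelihood over just two parameters of an assumed power-law eigenvalue decay. The data and the parameterization are hypothetical; this is not the paper's code.

```python
# Maximum likelihood fit of a spectral diagonal covariance model with
# power-law eigenvalue decay d_k = c * k**(-a) (illustrative sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

dim, n_samples = 200, 10          # high dimension, few samples
k = np.arange(1, dim + 1)
true_var = 5.0 * k ** (-1.5)      # "true" spectral variances (hypothetical)
data = rng.standard_normal((n_samples, dim)) * np.sqrt(true_var)

def neg_log_likelihood(params):
    log_c, a = params
    var = np.exp(log_c) * k ** (-a)
    # Gaussian negative log-likelihood with a diagonal covariance, up to constants.
    return 0.5 * n_samples * np.sum(np.log(var) + np.mean(data ** 2, axis=0) / var)

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], method='Nelder-Mead')
c_hat, a_hat = np.exp(fit.x[0]), fit.x[1]
print(f"estimated c = {c_hat:.2f}, decay exponent a = {a_hat:.2f}")
```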
6.
Data depths, which have been used to detect outliers or to extract a representative subset, can also be applied to classification. We propose a resampling-based classification method founded on the fact that resampling techniques yield a consistent estimator of the distribution of a statistic. The performance of this method was evaluated with eight contaminated models in terms of correct classification rates (CCRs), and the results were compared with those of other known methods. The proposed method consistently showed higher average CCRs, up to 4% higher than those of other methods. In addition, the method was applied to the Berkeley data, where the average CCRs were between 0.79 and 0.85.
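One way such a depth-plus-resampling classifier might look is sketched below in Python, using Mahalanobis depth, bootstrap averaging, and synthetic two-class data; the paper's exact depth functions and resampling scheme may differ.

```python
# Resampling-based maximum-depth classification (illustrative sketch).
import numpy as np

rng = np.random.default_rng(5)

def mahalanobis_depth(x, sample):
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(sample, rowvar=False))
    d2 = (x - mu) @ cov_inv @ (x - mu)
    return 1.0 / (1.0 + d2)

def classify(x, class_samples, n_boot=200):
    scores = []
    for sample in class_samples:
        # Average the depth of x over bootstrap resamples of this class.
        idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
        scores.append(np.mean([mahalanobis_depth(x, sample[i]) for i in idx]))
    return int(np.argmax(scores))

# Hypothetical two-class example.
class0 = rng.multivariate_normal([0, 0], np.eye(2), size=100)
class1 = rng.multivariate_normal([2, 2], np.eye(2), size=100)
print(classify(np.array([1.8, 1.6]), [class0, class1]))
```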
7.

When analyzing categorical data using loglinear models in sparse contingency tables, asymptotic results may fail. In this paper the empirical properties of three commonly used asymptotic tests of independence, based on the uniform association model for ordinal data, are investigated by means of Monte Carlo simulation. Five different bootstrap tests of independence are presented and compared to the asymptotic tests. The comparisons are made with respect to both the size and the power properties of the tests. Results indicate that the asymptotic tests have poor size control. The test based on the estimated association parameter is severely conservative, and the two chi-squared tests (Pearson, likelihood ratio) are both liberal. The bootstrap tests that either rely on a parametric assumption or are based on non-pivotal test statistics do not perform better than the asymptotic tests in all situations. The bootstrap tests that are based on approximately pivotal statistics provide both adjustment of size and enhancement of power, and are therefore recommended for use in situations similar to those included in the simulation study.
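For concreteness, the following Python sketch shows one of the simpler bootstrap schemes of this kind: a parametric bootstrap of the Pearson statistic under the fitted independence model, applied to a hypothetical sparse ordinal table. It is an illustration of the mechanism, not the specific tests compared in the paper.

```python
# Parametric bootstrap test of independence in a sparse contingency table
# (illustrative sketch).
import numpy as np

rng = np.random.default_rng(6)

def pearson_stat(table):
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    # Skip cells with zero expected count to avoid division by zero in sparse resamples.
    mask = expected > 0
    return np.sum((table[mask] - expected[mask]) ** 2 / expected[mask])

# Hypothetical sparse 4x5 ordinal contingency table.
observed = np.array([[3, 1, 0, 0, 1],
                     [2, 4, 2, 1, 0],
                     [0, 2, 3, 2, 1],
                     [0, 0, 1, 3, 4]])

n = observed.sum()
p_indep = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n ** 2
p_indep = p_indep / p_indep.sum()   # guard against floating-point drift

stat_obs = pearson_stat(observed)
boot_stats = []
for _ in range(2000):
    boot_table = rng.multinomial(n, p_indep.ravel()).reshape(observed.shape)
    boot_stats.append(pearson_stat(boot_table))

p_value = np.mean(np.array(boot_stats) >= stat_obs)
print(f"Pearson X2 = {stat_obs:.2f}, bootstrap p-value = {p_value:.3f}")
```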
8.
The paper proposes a new test for detecting the umbrella pattern under a general non-parametric scheme. The alternative asserts that the umbrella ordering holds, while the null hypothesis is its complement. The main focus is on controlling the power function of the test outside the alternative. As a result, the asymptotic type I error of the constructed solution is smaller than or equal to the fixed significance level α on the whole set where the umbrella ordering does not hold. Under finite sample sizes, this error is also controlled to a satisfactory extent. A simulation study shows, among other things, that the new test improves upon the solution widely recommended in the literature on the subject. A routine, written in R, is attached as the Supporting Information file.
9.
Many articles that have estimated models with forward-looking expectations have reported that the magnitude of the coefficient on the expectations term is very large when compared with the effects coming from past dynamics. This has sometimes been regarded as implausible and has led to the feeling that the expectations coefficient is biased upwards. A relatively general argument that has been advanced is that the bias could be due to structural changes in the means of the variables entering the structural equation. An alternative explanation is that the bias comes from weak instruments. In this article, we investigate the issue of upward bias in the estimated coefficient of the expectations variable based on a model where we can see what causes the breaks and how to control for them. We conclude that weak instruments are the most likely cause of any bias and note that structural change can affect the quality of instruments. We also look at some empirical work in Castle et al. (2014) on the New Keynesian Phillips curve (NKPC) in the Euro Area and the U.S., assessing whether the smaller coefficient on expectations that Castle et al. (2014) highlight is due to structural change. Our conclusion is that it is not; instead, it comes from their addition of variables to the NKPC. After allowing for the fact that there are weak instruments in the estimated re-specified model, it would seem that the forward coefficient estimate is actually quite high rather than low.
10.
Conventional approaches to forecasting large collections of time-series curves require building a separate model for each curve, which makes the modeling workload enormous and is impractical in real applications. This paper proposes a new method for this problem: curve classification modeling. The method first reduces the number of distinct model types, then classifies the curves and builds one model per class, which greatly reduces the modeling workload while preserving the original information as far as possible. The paper describes the principle and computational procedure of the method, and demonstrates its practicality and effectiveness with an application to forecasting GDP curves for multiple regions.
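A hedged Python sketch of the two-stage idea follows, with entirely hypothetical "GDP" curves and simple modeling choices (k-means for the classification step, a per-class linear trend for the forecasting step); the paper's own classification and forecasting procedures may differ.

```python
# Curve classification modeling: cluster curves, then fit one model per class
# (illustrative sketch with hypothetical data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Hypothetical "GDP" curves for 300 regions over 20 years, in two growth regimes.
years = np.arange(20)
slow = 1.0 + 0.02 * years
fast = 1.0 + 0.06 * years
labels_true = rng.integers(0, 2, size=300)
base = np.where(labels_true[:, None] == 0, slow, fast)
curves = base + 0.03 * rng.standard_normal((300, 20))

# Step 1: classify the curves (here with k-means on the raw trajectories).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(curves)

# Step 2: fit one model per class (here, a linear trend on the class mean)
# and extrapolate one step ahead for every curve in that class.
forecasts = np.empty(curves.shape[0])
next_year = years[-1] + 1
for label in range(kmeans.n_clusters):
    members = kmeans.labels_ == label
    slope, intercept = np.polyfit(years, curves[members].mean(axis=0), deg=1)
    class_pred = slope * next_year + intercept
    # Shift the class-level forecast to each curve's own current level.
    level_offset = curves[members, -1] - (slope * years[-1] + intercept)
    forecasts[members] = class_pred + level_offset

print(forecasts[:5])
```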