491.
This paper examines the estimation of the finite population product in mail surveys for the current occasion, in the context of sampling on two occasions, when there is non-response on both occasions. Estimators for the current occasion are derived as particular cases when there is non-response on the first occasion only or on the second occasion only. The gain in efficiency of the proposed estimator over the direct estimator, which uses no information gathered on the first occasion, is computed. The proposed strategy is compared with other estimators, and an empirical study is carried out to assess its performance.
492.
Approximate Bayesian computation (ABC) has become a popular technique for facilitating Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a hidden Markov model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm both to fit and to sample from our ABC approximation of the target probability density. This approach is shown empirically to be more accurate, with respect to the original filter, than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
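The ABC filtering idea can be illustrated with a toy sketch. The model below is a linear-Gaussian HMM whose likelihood is actually tractable; it stands in for an intractable one so the ABC kernel (a particle earns weight only if a pseudo-observation simulated from it lands within a tolerance eps of the real observation) is easy to see. All model settings and the tolerance are illustrative assumptions, not the paper's specification; shrinking eps reduces the bias at the cost of more rejected particles, matching the trade-off described above.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, eps = 50, 2000, 0.5        # time steps, particles, ABC tolerance

# Toy HMM: x_t = 0.9 x_{t-1} + N(0, 1), y_t = x_t + N(0, 0.5^2).
# We pretend the observation density is unavailable and can only be
# simulated from, which is the setting where ABC filtering applies.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y_obs = x_true + rng.normal(scale=0.5, size=T)

# ABC-SMC filter: weight, estimate, resample at each time step.
particles = rng.normal(size=N)
filter_means = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(size=N)      # propagate
    y_sim = particles + rng.normal(scale=0.5, size=N)     # simulate obs
    w = (np.abs(y_sim - y_obs[t]) < eps).astype(float)    # ABC kernel
    if w.sum() == 0:                                      # guard: no hits
        w[:] = 1.0
    w /= w.sum()
    filter_means[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)        # resample

print("mean abs filtering error:", np.mean(np.abs(filter_means - x_true)))
```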
493.
This paper presents a method for estimating crop-production statistics at smaller geographical levels, such as a community development block (generally referred to as a block), in order to make area-specific plans for agricultural development programmes in India. Using available district-level data on crop yield from crop-cutting experiments and data on auxiliary variables from various administrative sources, a suitable regression model is fitted. The fitted model is then used to predict crop production at the block level. Some scaled estimators are also developed using the predicted estimates. An empirical study is carried out to judge the merits of the proposed estimators.
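As a rough illustration of the prediction step, the sketch below fits a least-squares regression of district-level yield on auxiliary variables and uses it to predict block-level production, then rescales the predictions to a known district total (one plausible reading of the "scaled estimators"). All figures and variable choices are hypothetical, not the paper's data or exact model.

```python
import numpy as np

# Hypothetical district-level training data: yield from crop-cutting
# experiments (response) and administrative auxiliaries such as
# fertiliser use and irrigated-area fraction (predictors).
X_district = np.array([[40.0, 0.55], [55.0, 0.62], [30.0, 0.40], [48.0, 0.58]])
y_district = np.array([2.1, 2.8, 1.6, 2.5])      # yield, tonnes/ha

# Fit a linear regression by ordinary least squares.
A = np.column_stack([np.ones(len(X_district)), X_district])
beta, *_ = np.linalg.lstsq(A, y_district, rcond=None)

# Predict yield for blocks within one district from their auxiliaries,
# then convert to production using block cropped area.
X_block = np.array([[38.0, 0.50], [52.0, 0.60], [45.0, 0.57]])
block_area = np.array([1200.0, 900.0, 1500.0])   # ha
pred_yield = np.column_stack([np.ones(len(X_block)), X_block]) @ beta
pred_prod = pred_yield * block_area              # block-level production

# "Scaled" estimator: rescale block predictions so they add up to the
# known district production total.
district_total = 8200.0                          # tonnes (assumed known)
scaled_prod = pred_prod * district_total / pred_prod.sum()
print(scaled_prod)
```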
494.
We re-examine the criteria of “hyper-admissibility” and “necessary bestness” for the choice of estimator, from the point of view of their relevance to the design of actual surveys. Both criteria give rise to a unique choice of estimator (viz. the Horvitz–Thompson estimator Ŷ_HT) whatever the character under investigation or the sampling design. However, we show here that the “principal hyper-surfaces” (or “domains”) of dimension one, which are practically uninteresting, play the key role in arriving at this unique choice. A variance estimator v1(Ŷ_HT) (due to Horvitz and Thompson), which takes negative values “often”, is shown to be uniquely “hyper-admissible” in a wide class of unbiased estimators of the variance of Ŷ_HT. Extensive empirical evidence on the superiority of the Sen–Yates–Grundy variance estimator v2(Ŷ_HT) over v1(Ŷ_HT) is presented.
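For reference, a minimal sketch of the three quantities discussed: the Horvitz–Thompson estimator of a population total and the two competing variance estimators. The formulas are the standard textbook ones; the sample values and inclusion probabilities are made up for illustration. Note that v1 can go negative, while the Sen–Yates–Grundy form v2 is non-negative whenever π_i·π_j ≥ π_ij for all pairs, which is the practical point at issue.

```python
import numpy as np

def ht_estimate(y, pi):
    """Horvitz-Thompson estimator of the population total."""
    return np.sum(y / pi)

def v1_ht(y, pi, pij):
    """Horvitz-Thompson variance estimator; can take negative values."""
    n = len(y)
    v = np.sum((1 - pi) / pi**2 * y**2)
    for i in range(n):
        for j in range(n):
            if i != j:
                v += ((pij[i, j] - pi[i] * pi[j])
                      / (pi[i] * pi[j] * pij[i, j]) * y[i] * y[j])
    return v

def v2_syg(y, pi, pij):
    """Sen-Yates-Grundy variance estimator (fixed-size designs);
    non-negative whenever pi_i * pi_j >= pi_ij for all pairs."""
    n = len(y)
    v = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            v += ((pi[i] * pi[j] - pij[i, j]) / pij[i, j]
                  * (y[i] / pi[i] - y[j] / pi[j])**2)
    return v

# Illustrative sample of n = 3 units with made-up inclusion probabilities.
y = np.array([12.0, 7.0, 20.0])       # observed values
pi = np.array([0.3, 0.2, 0.5])        # first-order inclusion probabilities
pij = np.array([[0.30, 0.05, 0.12],   # second-order inclusion probabilities
                [0.05, 0.20, 0.08],   # (diagonal = pi_i; matrix symmetric)
                [0.12, 0.08, 0.50]])

print(ht_estimate(y, pi), v1_ht(y, pi, pij), v2_syg(y, pi, pij))
```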
495.
Balanced Confidence Regions Based on Tukey's Depth and the Bootstrap
We propose and study bootstrap confidence regions for multivariate parameters based on Tukey's depth. The bootstrap is based on the normalized or Studentized statistic formed from an independent and identically distributed random sample obtained from some unknown distribution in R^q. The bootstrap points are deleted on the basis of Tukey's depth until the desired confidence level is reached. The proposed confidence regions are shown to be second-order balanced in the context discussed by Beran. We also study the asymptotic consistency of Tukey's depth-based bootstrap confidence regions. The applicability of the proposed method is demonstrated in a simulation study.
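A minimal sketch of the depth-peeling idea, under several simplifying assumptions: the parameter is a bivariate mean, the statistic is normalized rather than Studentized, and Tukey (halfspace) depth is approximated by random projections, which only upper-bounds the exact depth, rather than computed exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

def tukey_depth_approx(x, cloud, n_dir=300):
    """Approximate halfspace (Tukey) depth of x within a point cloud via
    random projection directions (an upper bound on the exact depth)."""
    q = cloud.shape[1]
    u = rng.normal(size=(n_dir, q))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj_cloud = cloud @ u.T                     # (n, n_dir)
    proj_x = x @ u.T                             # (n_dir,)
    return (proj_cloud <= proj_x).mean(axis=0).min()

# Sample from some "unknown" bivariate distribution; parameter = mean.
n, B, alpha = 100, 500, 0.10
sample = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 2]], size=n)
theta_hat = sample.mean(axis=0)

# Bootstrap the normalized statistic sqrt(n) * (theta* - theta_hat).
boot = np.empty((B, 2))
for b in range(B):
    resample = sample[rng.integers(0, n, size=n)]
    boot[b] = np.sqrt(n) * (resample.mean(axis=0) - theta_hat)

# Peel away the shallowest bootstrap points until (1 - alpha) remain;
# the survivors outline the depth-based region for sqrt(n)*(theta_hat -
# theta), which is then translated back to the parameter scale.
depths = np.array([tukey_depth_approx(p, boot) for p in boot])
keep = boot[depths >= np.quantile(depths, alpha)]
region_pts = theta_hat - keep / np.sqrt(n)
print(len(keep), "points outline the", 100 * (1 - alpha), "% region")
```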
496.
This paper considers the problem of estimating a cumulative distribution function (cdf) when it is known a priori to dominate a known cdf. The estimator considered is obtained by adjusting the empirical cdf using the prior information. This adjusted estimator is shown to be consistent, its limiting distribution is found, and its mean squared error (MSE) is shown to be smaller than that of the empirical cdf. Its asymptotic efficiency (relative to the empirical cdf) is also found.
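A small sketch of one natural adjustment, assuming "dominate" means F(x) ≤ G(x) for all x (the sampled variable is stochastically larger); if the paper's convention runs the other way, swap the minimum for a maximum. The distributions and sample size are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# True F = Gamma(3, 1); known dominated cdf G = Gamma(2, 1), so that
# F(x) <= G(x) holds for all x.
x = rng.gamma(shape=3.0, scale=1.0, size=50)
G = stats.gamma(a=2.0).cdf

grid = np.linspace(0, 12, 200)
ecdf = (x[:, None] <= grid).mean(axis=0)     # empirical cdf on a grid
adjusted = np.minimum(ecdf, G(grid))         # enforce F_hat <= G

# Where the constraint binds, truth <= G < ecdf, so the adjustment can
# only move the estimate toward the truth -- the intuition behind the
# MSE improvement claimed in the abstract.
true_F = stats.gamma(a=3.0).cdf(grid)
print("MSE ecdf:    ", np.mean((ecdf - true_F) ** 2))
print("MSE adjusted:", np.mean((adjusted - true_F) ** 2))
```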
497.
The present study utilizes an operational model as well as simple empirical relationships for estimating hazard zones due to fire, explosion, and toxic vapor cloud dispersion. The empirical relationships are based on giving appropriate weightage to each of the parameters on which the hazard in question (viz., fire, explosion, toxic vapor dispersion) depends. Results from these two approaches [i.e., FLAMCALC, an operational model of the U.K. Health and Safety Executive (HSE), and an empirical model named FIREX] have been compared with the data obtained from the Mexico City disaster in 1984. In general, results from the empirical approach and FLAMCALC are comparable to the observed effects.
498.
Heavy gas dispersion models have been developed at IIT (hereinafter referred to as IIT heavy gas models I and II) with a view to estimating vulnerable zones due to accidental (instantaneous and continuous, respectively) releases of dense toxic material into the atmosphere. The results obtained from the IIT heavy gas models have been compared with those obtained from the DEGADIS model [Dense Gas Dispersion Model, developed by Havens and Spicer (1985) for the U.S. Coast Guard] as well as with observed data collected during the Burro Series, Maplin Sands, and Thorney Island field trials. Both models include the relevant features of dense gas dispersion, viz., gravity slumping, air entrainment, cloud heating, and transition to the passive phase. The DEGADIS model has been chosen as the benchmark in this study because it incorporates most of the physical processes of dense gas dispersion in an elaborate manner and has been satisfactorily tested against field observations. The predictions from the IIT heavy gas models follow a fairly similar trend to the observed values from the Thorney Island, Burro Series, and Maplin Sands experiments, with a tendency toward overprediction. There is good agreement between the predictions of IIT heavy gas models I and II and those from DEGADIS, except for the simulations of IIT heavy gas model I pertaining to very large release quantities under highly stable atmospheric conditions. In summary, the performance of the IIT heavy gas models has been found to be reasonably good, both with respect to the limited field data available and in the various simulations performed with DEGADIS (selected on the basis of relevant storages in industry and prevalent meteorological conditions). However, there is scope for improvement in the IIT heavy gas models (viz., better formulation of entrainment, modification of coefficients, transition criteria, etc.). Further, isotons (nomograms) have been prepared using the IIT heavy gas models for chlorine; these provide safe distances for various storage amounts under 24 meteorological scenarios prevalent over the entire year. The nomograms are prepared so that a non-specialist can use them easily for control and management in an emergency requiring the evacuation of people in the affected region. The results can also be useful for siting and for limiting storage quantities.
499.
A non-normal class of distribution functions (the Edgeworth series distribution) with three and four parameters is considered for the dose–binary response relationship. This class accounts for the non-normality (expressed in terms of skewness and kurtosis) present in the relationship, in addition to the usual location and scale parameters (generally the only ones considered by two-parameter models). We present the maximum likelihood method for estimating the parameters and a test of the probit (normal distribution) hypothesis. When fitted to the data of Milicer & Szczotka (1966), the Edgeworth series distribution showed excellent closeness to the observed values, a significant improvement over the probit and logit fits (Aranda-Ordaz, 1981), and a better fit than the Prentice (1976) model.
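A hedged sketch of the fitting step: a standardized Edgeworth cdf approximation, truncated at the skewness and kurtosis terms and clipped because the truncated series is not a genuine cdf for every parameter value, is fitted to hypothetical grouped dose-response data by maximum likelihood. The exact series form, constraints, and parameterization used in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def edgeworth_cdf(z, skew, kurt):
    """Truncated Edgeworth approximation to a standardized cdf with the
    given skewness and excess kurtosis; clipped to (0, 1) since the
    truncated series need not be a valid cdf."""
    corr = (skew / 6 * (z**2 - 1)
            + kurt / 24 * (z**3 - 3 * z)
            + skew**2 / 72 * (z**5 - 10 * z**3 + 15 * z))
    return np.clip(norm.cdf(z) - norm.pdf(z) * corr, 1e-10, 1 - 1e-10)

def neg_loglik(params, dose, n, r):
    """Binomial negative log-likelihood for grouped dose-response data."""
    mu, log_sigma, skew, kurt = params
    p = edgeworth_cdf((dose - mu) / np.exp(log_sigma), skew, kurt)
    return -np.sum(r * np.log(p) + (n - r) * np.log(1 - p))

# Illustrative grouped data: dose levels, subjects per level, responders.
dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
n = np.array([40, 40, 40, 40, 40, 40])
r = np.array([2, 8, 18, 29, 36, 39])

fit = minimize(neg_loglik, x0=[3.5, 0.0, 0.0, 0.0], args=(dose, n, r),
               method="Nelder-Mead")
mu, log_sigma, skew, kurt = fit.x
print("mu=%.3f sigma=%.3f skew=%.3f kurt=%.3f"
      % (mu, np.exp(log_sigma), skew, kurt))

# A likelihood-ratio test of the probit hypothesis (skew = kurt = 0)
# would compare this fit against the two-parameter probit fit.
```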
500.
Inferences for survival curves based on right-censored continuous or grouped data are studied. Testing homogeneity against an order-restricted alternative, and testing the order restriction as the null hypothesis, are both considered. Under a proportional hazards model, an ordering on the survival curves corresponds to an ordering on the regression coefficients. Approximate likelihood methods are obtained by applying order-restricted procedures to the estimates of the regression coefficients. Ordered analogues of the log-rank test, based on the score statistics, are also considered. Chi-bar-squared distributions, which have been studied extensively, are shown to provide reasonable approximations to the null distributions of these test statistics. Using Monte Carlo techniques, the powers of these two types of tests are compared with those available in the literature.
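To illustrate the chi-bar-squared approximation, the sketch below simulates the null distribution of the standard likelihood-ratio-type statistic for testing equality of k = 4 normal coefficients against an ordered alternative (the projection onto the ordered cone is the isotonic regression, computed by PAVA), and compares a tail probability with the chi-bar-squared mixture whose level probabilities, in the equal-weights case, are Stirling numbers of the first kind over k!. This is the textbook setting, not the paper's specific censored-data models.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
k, n_sim, c = 4, 50000, 3.0
iso = IsotonicRegression()
idx = np.arange(k)

# Under H0 (equal coefficients), Monte Carlo the ordered LRT statistic
# T = sum((isotonic fit - grand mean)^2) for standardized estimates.
T = np.empty(n_sim)
for s in range(n_sim):
    z = rng.normal(size=k)             # standardized estimates under H0
    z_iso = iso.fit_transform(idx, z)  # projection onto the ordered cone
    T[s] = np.sum((z_iso - z.mean()) ** 2)

# Chi-bar-squared tail: mixture of chi-square tails with level
# probabilities |s(k, l)| / k! (hard-coded for k = 4: 6, 11, 6, 1 / 24).
w = np.array([6, 11, 6, 1]) / 24.0
tail = sum(w[l] * chi2.sf(c, df=l) for l in range(1, k))  # df=0 term is 0
print("Monte Carlo:", (T > c).mean(), " chi-bar-squared:", tail)
```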