21.
Small area estimation (SAE) concerns how to reliably estimate population quantities of interest when some areas or domains have very limited samples. This is an important issue in large population surveys, because the geographical areas or groups with only small samples, or even no samples, are often of interest to researchers and policy-makers. For example, large population health surveys, such as the Behavioral Risk Factor Surveillance System and the Ohio Medicaid Assessment Survey (OMAS), are regularly conducted for monitoring insurance coverage and healthcare utilization. Classic approaches usually provide accurate estimators at the state level or large geographical region level, but they fail to provide reliable estimators for many rural counties where the samples are sparse. Moreover, a systematic evaluation of the performance of SAE methods in real-world settings is lacking in the literature. In this paper, we propose a Bayesian hierarchical model with constraints on the parameter space and show that it provides superior estimators for county-level adult uninsured rates in Ohio based on the 2012 OMAS data. Furthermore, we perform extensive simulation studies to compare our methods with a collection of common SAE strategies, including direct estimators, synthetic estimators, composite estimators, and the Bayesian hierarchical model-based estimators of Datta, Ghosh, Steorts, and Maples [Bayesian benchmarking with applications to small area estimation. Test 2011;20(3):574–588]. To set a fair basis for comparison, we generate our simulation data with characteristics mimicking the real OMAS data, so that neither model-based nor design-based strategies use the true model specification. The estimators based on our proposed model are shown to outperform the other estimators for small areas in both the simulation study and the real data analysis.
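The direct, synthetic, and composite estimators named above can be illustrated with a minimal sketch; the county samples, sample sizes, and shrinkage constant below are hypothetical, not taken from OMAS:

```python
import random

random.seed(0)

# Hypothetical uninsured indicators (0/1) for three "counties", with
# deliberately unequal sample sizes to mimic sparse rural areas.
samples = {
    "A": [random.random() < 0.12 for _ in range(500)],   # large sample
    "B": [random.random() < 0.20 for _ in range(40)],    # small sample
    "C": [random.random() < 0.18 for _ in range(8)],     # very small sample
}

# Synthetic estimator: borrow strength by using the overall proportion.
overall = sum(sum(s) for s in samples.values()) / sum(len(s) for s in samples.values())

def direct(s):
    """Direct estimator: the area's own sample proportion."""
    return sum(s) / len(s)

def composite(s, phi):
    """Composite estimator: shrink the direct estimate toward the overall
    (synthetic) estimate; phi grows with the area's sample size."""
    return phi * direct(s) + (1 - phi) * overall

for name, s in samples.items():
    n = len(s)
    phi = n / (n + 30)   # illustrative shrinkage constant, not from the paper
    print(name, n, round(direct(s), 3), round(composite(s, phi), 3))
```

For the large county the composite estimate stays close to the direct one, while for the tiny county it is pulled toward the overall rate, which is the behavior that motivates model-based SAE.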
22.
In quantitative trait locus (QTL) linkage studies using experimental crosses, the conventional normal location-shift model or other parameterizations may be unnecessarily restrictive. We generalize the mapping problem to a genuine nonparametric setup and provide a robust estimation procedure for the situation where the underlying phenotype distributions are completely unspecified. Classical Wilcoxon–Mann–Whitney statistics are employed for point and interval estimation of QTL positions and effects.
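A minimal sketch of the Wilcoxon–Mann–Whitney machinery for two genotype groups; the phenotype values are illustrative, and the Hodges–Lehmann shift estimate stands in here for the paper's effect estimate:

```python
def mann_whitney_u(x, y):
    """Count pairs (xi, yj) with xi > yj, ties counting 1/2: the classical
    Mann-Whitney U statistic."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

def hl_shift(x, y):
    """Hodges-Lehmann shift estimate: the median of all pairwise differences,
    a natural nonparametric estimate of a location (QTL effect) shift."""
    diffs = sorted(xi - yj for xi in x for yj in y)
    k = len(diffs)
    return diffs[k // 2] if k % 2 else (diffs[k // 2 - 1] + diffs[k // 2]) / 2

x = [5.1, 6.2, 5.9, 7.0]   # phenotypes, genotype group 1 (illustrative)
y = [4.0, 4.8, 5.5, 5.0]   # phenotypes, genotype group 2
print(mann_whitney_u(x, y), hl_shift(x, y))
```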
23.
This article considers the problem of estimating the parameters of the Weibull distribution under a progressive Type-I interval censoring scheme with beta-binomial removals. Classical as well as Bayesian procedures for the estimation of the unknown model parameters have been developed. The Bayes estimators are obtained under the squared error loss function (SELF) and the general entropy loss function (GELF) using an MCMC technique. The performance of the estimators is discussed in terms of their mean squared errors (MSEs). Further, an expression for the expected number of total failures is obtained. A real dataset of survival times for patients with plasma cell myeloma is used to illustrate the suitability of the proposed methodology.
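The interval-censored Weibull likelihood can be sketched as follows; the inspection times, failure counts, and fixed removal numbers are hypothetical (the paper draws removals from a beta-binomial, which this sketch does not model):

```python
import math
from scipy.optimize import minimize

# Hypothetical inspection scheme: inspection times t, failure counts d
# observed in each interval, and removal counts r at each inspection.
t = [1.0, 2.0, 3.0, 4.0]
d = [4, 7, 5, 3]
r = [1, 2, 1, 10]

def weibull_cdf(x, shape, scale):
    return 1.0 - math.exp(-((x / scale) ** shape))

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0.0 or scale <= 0.0:
        return float("inf")
    ll, prev = 0.0, 0.0
    for ti, di, ri in zip(t, d, r):
        p_fail = weibull_cdf(ti, shape, scale) - weibull_cdf(prev, shape, scale)
        surv = 1.0 - weibull_cdf(ti, shape, scale)
        if p_fail <= 0.0 or surv <= 0.0:
            return float("inf")
        # di failures inside the interval, ri units removed while still surviving
        ll += di * math.log(p_fail) + ri * math.log(surv)
        prev = ti
    return -ll

res = minimize(neg_log_lik, x0=[1.0, 2.0], method="Nelder-Mead")
print(res.x)   # ML estimates of (shape, scale)
```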
24.
Statistical inferences for the geometric process (GP) are derived when the distribution of the first occurrence time is assumed to be inverse Gaussian (IG). An α-series process is introduced as a possible alternative to the GP, since the GP is sometimes inappropriate for reliability and scheduling problems. In this study, the statistical inference problem for the α-series process is considered where the distribution of the first occurrence time is IG. The estimators of the parameters α, μ, and σ² are obtained by the maximum likelihood (ML) method. Asymptotic distributions and consistency properties of the ML estimators are derived. In order to compare the efficiencies of the ML estimators with the widely used nonparametric modified moment (MM) estimators, Monte Carlo simulations are performed. The results show that the ML estimators are more efficient than the MM estimators. Moreover, two real-life datasets are given for application purposes.
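A minimal simulation of an α-series process with inverse Gaussian occurrence times, together with a crude regression-based estimate of α (not the paper's ML or MM estimators):

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, mu, lam = 0.4, 2.0, 5.0   # illustrative true values; lam is the IG shape
n = 2000

# alpha-series process: the k-th inter-occurrence time is X_k = Y_k / k**alpha,
# where Y_k are i.i.d. inverse Gaussian (Wald) with mean mu.
k = np.arange(1, n + 1)
y = rng.wald(mu, lam, size=n)
x = y / k**alpha

# Crude sketch of estimating alpha: since log X_k = log Y_k - alpha * log k,
# regress log X_k on log k and negate the slope.
slope, intercept = np.polyfit(np.log(k), np.log(x), 1)
alpha_hat = -slope
print(alpha_hat)
```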
25.
In this paper, multiple criteria sorting methods based on data envelopment analysis (DEA) are developed to evaluate research and development (R&D) projects. The weight intervals of the criteria are obtained from the Interval Analytic Hierarchy Process and employed as assurance region constraints in the models. Based on DEA, two threshold estimation models and five assignment models are developed for sorting. In addition to sorting, these models also provide a ranking of the projects. The developed approach and the well-known sorting method UTADIS are applied to a real case study to analyze the R&D projects proposed to a grant program executed by a government funding agency in 2009. A five-level R&D project selection criteria hierarchy and an assisting point allocation guide are defined to measure and quantify the performance of the projects. In the case study, the developed methods are observed to be more stable than UTADIS.
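A basic input-oriented CCR envelopment model, the building block underlying DEA-based evaluation, can be sketched as a linear program; the project inputs and outputs are hypothetical, and the paper's assurance region constraints and sorting thresholds are omitted:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical R&D projects: two inputs (cost, staff) and one output (score).
X = np.array([[10.0, 5.0], [8.0, 8.0], [12.0, 4.0], [9.0, 6.0]])  # inputs
Y = np.array([[6.0], [5.0], [7.0], [6.5]])                        # outputs

def ccr_efficiency(o):
    """Input-oriented CCR model for project o:
    minimize theta s.t. X' lam <= theta * x_o,  Y' lam >= y_o,  lam >= 0."""
    n, m = X.shape          # n projects, m inputs
    s = Y.shape[1]          # s outputs
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o].reshape(m, 1), X.T]         # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]          # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

effs = [ccr_efficiency(o) for o in range(len(X))]
print([round(e, 3) for e in effs])
```

An efficiency of 1 means the project lies on the empirical frontier; sorting methods then compare such scores against estimated category thresholds.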
26.
In recent years, the issue of water allocation among competing users has been of great concern in many countries due to increasing water demand from population growth and economic development. In water management systems, the inherent uncertainties and their potential interactions pose a significant challenge for water managers seeking to identify optimal water-allocation schemes in a complex and uncertain environment. This paper thus proposes a methodology that incorporates optimization techniques and statistical experimental designs within a general framework to address the issues of uncertainty and risk, as well as their correlations, in a systematic manner. A water resources management problem is used to demonstrate the applicability of the proposed methodology. The results indicate that interval solutions can be generated for the objective function and decision variables, and a number of decision alternatives can be obtained under different policy scenarios. The solutions with different risk levels of constraint violation help quantify the relationship between the economic objective and the system risk, which is meaningful for supporting risk management. The experimental data obtained from Taguchi's orthogonal array design are useful for identifying the significant factors affecting the mean of the total net benefits. The findings from the mixed-level factorial experiment then help reveal the latent interactions between those significant factors at different levels and their effects on the modeling response.
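Main-effect screening from a Taguchi orthogonal array can be sketched as follows; the L4(2³) array is standard, but the three factors and the responses are hypothetical:

```python
# An L4(2^3) orthogonal array: four runs screening three two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Hypothetical responses (e.g. total net benefit from four model runs).
response = [52.0, 61.0, 58.0, 70.0]

# Main effect of each factor: mean response at level 1 minus mean at level 0.
# Orthogonality of the array keeps these one-factor averages unconfounded.
effects = []
for f in range(3):
    lvl1 = [r for run, r in zip(L4, response) if run[f] == 1]
    lvl0 = [r for run, r in zip(L4, response) if run[f] == 0]
    effect = sum(lvl1) / len(lvl1) - sum(lvl0) / len(lvl0)
    effects.append(effect)
    print(f"factor {f}: main effect = {effect:.1f}")
```

Factors with large main effects would then be carried into a mixed-level factorial experiment to probe their interactions, as the abstract describes.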
27.
A method for obtaining prediction intervals for the outcome of a future experiment is presented. The method uses hypothesis testing as a tool to derive prediction intervals and assumes that the probability distributions of the informative and future experiments are one-parameter exponential families. Asymptotic similar mean-coverage prediction intervals are derived using the score test as the test statistic. Examples are presented, and the asymptotic prediction limits are compared with prediction limits given in the literature.
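A sketch of the test-inversion idea for a Poisson example (one member of the one-parameter exponential family); the pooled-estimate score-type statistic below is an illustration of the approach, not the paper's exact construction:

```python
from scipy.stats import norm

def poisson_pred_interval(x_sum, n, m=1, level=0.95):
    """Approximate prediction interval for a future count Y ~ Poisson(m * lam),
    given an observed total x_sum from n past Poisson(lam) observations:
    accept exactly those y not rejected by an asymptotic score-type test."""
    z2 = norm.ppf(1 - (1 - level) / 2) ** 2
    accepted = []
    for y in range(int(10 * (x_sum / n + 5) * m) + 50):
        lam = (x_sum + y) / (n + m)          # pooled estimate under H0
        var = lam * (m + m * m / n)          # approx. variance of Y - (m/n) * X
        ok = (y - m * x_sum / n) ** 2 <= z2 * var if var > 0 else y == 0
        if ok:
            accepted.append(y)
    return min(accepted), max(accepted)

print(poisson_pred_interval(50, 10))   # e.g. 10 past counts totalling 50
```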
28.
The current financial turbulence in Europe inspires, and perhaps requires, researchers to rethink how to measure incomes, wealth, and other parameters of interest to policy-makers and others. The noticeable increase in disparities between less and more fortunate individuals suggests that measures based upon comparing the incomes of the less fortunate with the mean of the entire population may not be adequate. The classical Gini and related indices of economic inequality, however, are based on exactly such comparisons. For this reason, in this paper we explore and contrast the classical Gini index with the newer Zenga index, the latter being based on comparisons of the means of the less and more fortunate sub-populations, irrespective of the threshold used to delineate the two sub-populations. The empirical part of the paper is based on the 2001 wave of the European Community Household Panel data set provided by Eurostat. Even though the sample sizes appear to be large, we supplement the estimated Gini and Zenga indices with measures of variability in the form of normal, t-bootstrap, and bootstrap bias-corrected and accelerated confidence intervals.
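The two indices can be sketched directly from their definitions; the income vector is hypothetical, and the discrete Zenga formula below is one simple convention (boundary handling varies in the literature):

```python
def gini(x):
    """Mean absolute difference over twice the mean: a standard Gini formula."""
    n = len(x)
    mu = sum(x) / n
    mad = sum(abs(a - b) for a in x for b in x) / (n * n)
    return mad / (2 * mu)

def zenga(x):
    """Zenga-style index: average of 1 - (lower-group mean / upper-group mean)
    over the n-1 interior split points of the sorted sample, comparing the less
    fortunate with the more fortunate at every threshold."""
    xs = sorted(x)
    n = len(xs)
    total = sum(xs)
    z, lower = 0.0, 0.0
    for i in range(1, n):
        lower += xs[i - 1]
        m_lo = lower / i                 # mean of the i poorest
        m_hi = (total - lower) / (n - i) # mean of the n - i richest
        z += 1.0 - m_lo / m_hi
    return z / (n - 1)

incomes = [12, 15, 18, 20, 25, 30, 40, 60, 90, 190]   # illustrative incomes
print(round(gini(incomes), 3), round(zenga(incomes), 3))
```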
29.
In this paper, we consider the simple step-stress model for a two-parameter exponential distribution when both parameters are unknown and the data are Type-II censored. It is assumed that under the two different stress levels only the scale parameter changes, while the location parameter remains unchanged. It is observed that the maximum likelihood estimators do not always exist. We obtain the maximum likelihood estimates of the unknown parameters whenever they exist, and we provide the exact conditional distributions of the maximum likelihood estimators of the scale parameters. Since constructing exact confidence intervals from the conditional distributions is very difficult, we propose using the observed Fisher information matrix for this purpose and suggest the bootstrap method for constructing confidence intervals. Bayes estimates and associated credible intervals are obtained using the importance sampling technique. Extensive simulations are performed to compare the performance of the different confidence and credible intervals in terms of their coverage percentages and average lengths. The performance of the bootstrap confidence intervals is quite satisfactory even for small sample sizes.
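A parametric bootstrap percentile interval for the exponential scale under Type-II censoring can be sketched as follows; the single-stress setting and the sample sizes are illustrative simplifications of the paper's step-stress model:

```python
import random

random.seed(2)

def exp_type2_mle(sample, n, r):
    """MLE of the exponential scale from the r smallest of n observations:
    (sum of the r failures + (n - r) copies of the r-th failure) / r."""
    xs = sorted(sample)[:r]
    return (sum(xs) + (n - r) * xs[-1]) / r

n, r, theta = 30, 20, 10.0                       # illustrative setting
data = [random.expovariate(1 / theta) for _ in range(n)]
theta_hat = exp_type2_mle(data, n, r)

# Parametric bootstrap percentile interval: resample from Exp(theta_hat),
# re-censor at the r-th order statistic, and recompute the MLE.
B = 2000
boot = []
for _ in range(B):
    bs = [random.expovariate(1 / theta_hat) for _ in range(n)]
    boot.append(exp_type2_mle(bs, n, r))
boot.sort()
lo, hi = boot[int(0.025 * B)], boot[int(0.975 * B) - 1]
print(round(theta_hat, 2), (round(lo, 2), round(hi, 2)))
```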
30.
We consider the following fragmentation model of the unit interval 𝕀. We start by fragmenting 𝕀 into two pieces with uniform random sizes. One of these two subintervals is then chosen at random according to a β-size-biased picking procedure, β ∈ ℝ. This tagged fragment is next broken into two random pieces, one of which is chosen at random in the same way for further splitting; the process is then iterated independently. This model constitutes a recursive β-partitioning procedure where splitting occurs only at one of the two fragments formed in the previous step of fragmentation. We investigate some statistical features of this model.
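The recursive β-partitioning procedure is easy to simulate; the sketch below tracks the size of the tagged fragment, with β = 1 giving plain size-biased picking and β = 0 a uniform coin flip:

```python
import random

random.seed(3)

def tagged_fragment_sizes(beta, steps):
    """Sizes of the tagged fragment along one run of the recursive
    beta-partitioning: split the current fragment uniformly, then pick one
    piece with probability proportional to (size ** beta)."""
    sizes = []
    current = 1.0
    for _ in range(steps):
        u = random.random()
        a, b = current * u, current * (1 - u)    # uniform binary split
        pa = a**beta / (a**beta + b**beta)       # beta-size-biased pick
        current = a if random.random() < pa else b
        sizes.append(current)
    return sizes

run = tagged_fragment_sizes(beta=1.0, steps=20)
print(run[-1])
```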