71.
We develop a novel computational methodology for Bayesian optimal sequential design for nonparametric regression. This methodology, which we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas from simulated annealing, genetic or evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of the methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic spline priors.
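The abstract does not spell the algorithm out, so the following is a minimal Python sketch of the kind of search it describes: several chains at different temperatures performing tempered Metropolis updates on candidate designs, with an occasional evolutionary crossover move. The utility function, step sizes, and crossover rule are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(design):
    # Placeholder utility: reward designs whose points spread evenly over [0, 1].
    # In the paper's setting this would be a Monte Carlo estimate of the Bayesian
    # expected utility of the design for nonparametric function estimation.
    d = np.sort(design)
    gaps = np.diff(np.concatenate(([0.0], d, [1.0])))
    return -np.sum(gaps ** 2)

def evolve(population, temperatures, step=0.05):
    """One generation: tempered Metropolis mutation plus one crossover exchange."""
    new_pop = []
    for design, temp in zip(population, temperatures):
        proposal = np.clip(design + rng.normal(0, step, design.shape), 0, 1)
        log_ratio = (expected_utility(proposal) - expected_utility(design)) / temp
        new_pop.append(proposal if np.log(rng.uniform()) < log_ratio else design)
    # Crossover: exchange a random subset of design points between two chains.
    # (Applied directly here; a full algorithm would accept it via a Metropolis rule.)
    i, j = rng.choice(len(new_pop), 2, replace=False)
    mask = rng.uniform(size=new_pop[i].shape) < 0.5
    child_i, child_j = new_pop[i].copy(), new_pop[j].copy()
    child_i[mask], child_j[mask] = new_pop[j][mask], new_pop[i][mask]
    new_pop[i], new_pop[j] = child_i, child_j
    return new_pop

# Five parallel chains of 8-point designs on an inhomogeneous temperature ladder.
population = [rng.uniform(size=8) for _ in range(5)]
temperatures = np.geomspace(1.0, 0.05, 5)
for _ in range(200):
    population = evolve(population, temperatures)
print("best design (coldest chain):", np.sort(population[-1]).round(3))
```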
72.
Small area statistics obtained from sample survey data provide a critical source of information used to study health, economic, and sociological trends. However, most large-scale sample surveys are not designed for the purpose of producing small area statistics. Moreover, data disseminators are prevented from releasing public-use microdata for small geographic areas for disclosure reasons, thus limiting the utility of the data they collect. This research evaluates a synthetic data method, intended for data disseminators, for releasing public-use microdata for small geographic areas based on complex sample survey data. The method replaces all observed survey values with synthetic (or imputed) values generated from a hierarchical Bayesian model that explicitly accounts for complex sample design features, including stratification, clustering, and sampling weights. The method is applied to restricted microdata from the National Health Interview Survey, and synthetic data are generated for both sampled and non-sampled small areas. The analytic validity of the resulting small area inferences is assessed by direct comparison with the actual data, a simulation study, and a cross-validation study.
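As a rough illustration of the synthetic-data idea (not the authors' survey-weighted model), the sketch below fits a toy normal-normal hierarchical model to a handful of small areas and replaces each area's observed values with posterior-predictive draws; all variances, sizes, and names are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy observed microdata: continuous outcomes in 10 small areas.
areas = 10
n_per_area = rng.integers(5, 40, areas)
true_mean = rng.normal(50, 5, areas)
observed = [rng.normal(true_mean[a], 8, n_per_area[a]) for a in range(areas)]

# Normal-normal hierarchical model with known variances (a drastic
# simplification of the design-based hierarchical model in the paper).
sigma2, tau2, mu0 = 64.0, 25.0, 50.0
synthetic = []
for a in range(areas):
    n, ybar = n_per_area[a], observed[a].mean()
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    post_mean = post_var * (n * ybar / sigma2 + mu0 / tau2)
    theta = rng.normal(post_mean, np.sqrt(post_var))          # draw the area effect
    synthetic.append(rng.normal(theta, np.sqrt(sigma2), n))   # posterior-predictive draws

print("observed vs synthetic area means (first 3 areas):")
for a in range(3):
    print(f"  area {a}: {observed[a].mean():6.2f} vs {synthetic[a].mean():6.2f}")
```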
73.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest focuses on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting "random high" / selection bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation for model averaging is the observation that subgroup selection can be viewed as model selection, so that methods for dealing with model selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach, and we illustrate it with an example from an early-phase clinical trial.
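One way to make the subgroup-selection-as-model-selection idea concrete is the hedged sketch below: it fits two candidate models (with and without a subgroup-specific treatment effect), converts BIC values into approximate posterior model probabilities, and reports a model-averaged subgroup effect. The paper's method is fully Bayesian; the BIC weighting here is only a stand-in, and the data and names are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial: treatment indicator and a biomarker defining a candidate subgroup.
n = 400
treat = rng.integers(0, 2, n)
marker = rng.integers(0, 2, n)                  # candidate subgroup membership
y = 0.3 * treat + 0.4 * treat * marker + rng.normal(0, 1, n)

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    bic = len(y) * np.log(rss / len(y)) + X.shape[1] * np.log(len(y))
    return beta, bic

ones = np.ones(n)
models = {
    "no subgroup effect": np.column_stack([ones, treat]),
    "subgroup-specific effect": np.column_stack([ones, treat, marker, treat * marker]),
}
fits = {name: fit(X, y) for name, X in models.items()}

# BIC-based approximation to posterior model probabilities.
bics = np.array([b for _, b in fits.values()])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

# Treatment effect in the marker-positive subgroup under each model,
# averaged with the (approximate) posterior model probabilities.
effects = np.array([
    fits["no subgroup effect"][0][1],
    fits["subgroup-specific effect"][0][1] + fits["subgroup-specific effect"][0][3],
])
print("model weights:", dict(zip(models, weights.round(3))))
print("model-averaged subgroup effect:", round(float(weights @ effects), 3))
```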
74.
Small area estimation (SAE) concerns how to reliably estimate population quantities of interest when some areas or domains have very limited samples. This is an important issue in large population surveys, because the geographical areas or groups with only small samples, or even no samples, are often of interest to researchers and policy-makers. For example, large population health surveys, such as the Behavioral Risk Factor Surveillance System and the Ohio Medicaid Assessment Survey (OMAS), are regularly conducted to monitor insurance coverage and healthcare utilization. Classic approaches usually provide accurate estimators at the state level or for large geographical regions, but they fail to provide reliable estimators for many rural counties where samples are sparse. Moreover, a systematic evaluation of the performance of SAE methods in real-world settings is lacking in the literature. In this paper, we propose a Bayesian hierarchical model with constraints on the parameter space and show that it provides superior estimators for county-level adult uninsured rates in Ohio based on the 2012 OMAS data. Furthermore, we perform extensive simulation studies to compare our methods with a collection of common SAE strategies, including direct estimators, synthetic estimators, composite estimators, and the Bayesian hierarchical model-based estimators of Datta, Ghosh, Steorts, and Maples [Bayesian benchmarking with applications to small area estimation. Test 2011;20(3):574–588]. To set a fair basis for comparison, we generate our simulation data with characteristics mimicking the real OMAS data, so that neither the model-based nor the design-based strategies use the true model specification. The estimators based on our proposed model are shown to outperform the other estimators for small areas in both the simulation study and the real data analysis.
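The abstract names direct, synthetic, and composite estimators as comparators; the sketch below illustrates those three on simulated county-level uninsured rates, shrinking noisy direct estimates toward a state-level synthetic estimate. It is a simple shrinkage illustration under assumed variances, not the constrained Bayesian hierarchical model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated county-level data: true uninsured rates and small-sample direct estimates.
counties = 20
true_rate = rng.beta(2, 10, counties)                   # true county uninsured rates
n_sample = rng.integers(15, 200, counties)              # sparse samples in rural counties
direct = rng.binomial(n_sample, true_rate) / n_sample   # direct (design-based) estimates

# Synthetic estimator: a single state-level rate applied to every county.
synthetic = np.average(direct, weights=n_sample)

# Composite estimator: shrink each direct estimate toward the synthetic one,
# with more shrinkage where the direct estimate is noisier (small n).
var_direct = direct * (1 - direct) / n_sample
var_between = max(np.var(direct) - var_direct.mean(), 1e-6)
gamma = var_between / (var_between + var_direct)        # shrinkage weights
composite = gamma * direct + (1 - gamma) * synthetic

for name, est in [("direct", direct),
                  ("synthetic", np.full(counties, synthetic)),
                  ("composite", composite)]:
    print(f"{name:9s} RMSE: {np.sqrt(np.mean((est - true_rate) ** 2)):.4f}")
```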
75.
Fault diagnosis includes classification as a main task. Bayesian networks (BNs) present several advantages in the classification task, and previous works have suggested their use as classifiers. Because a classifier is often only one part of a larger decision process, this article proposes, for industrial process diagnosis, the use of a Bayesian method called the dynamic Markov blanket classifier, whose main goal is the induction of accurate Bayesian classifiers that have dependable probability estimates and reveal actual relationships among the most relevant variables. In addition, a new method, named variable ordering multiple offspring sampling, capable of inducing a BN to be used as a classifier, is presented. The performance of these methods is assessed on data from a benchmark problem known as the Tennessee Eastman process. The results are compared with naive Bayes and tree augmented network classifiers, and confirm that both proposed algorithms can provide good classification accuracy as well as knowledge about the relevant variables.
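As a point of reference for the comparison mentioned above, here is a minimal naive Bayes classifier (the simplest Bayesian-network classifier, in which every feature depends only on the class node) on toy discretized process data; it is not the dynamic Markov blanket classifier and not the Tennessee Eastman data, just an illustrative baseline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discretized process data: 5 binary sensor variables, 3 fault classes.
n, d, classes = 600, 5, 3
y = rng.integers(0, classes, n)
X = (rng.uniform(size=(n, d)) <
     0.2 + 0.3 * (y[:, None] % (np.arange(d) + 2) == 0)).astype(int)

# Naive Bayes with Laplace smoothing: class prior and per-class feature probabilities.
prior = np.array([(y == c).mean() for c in range(classes)])
cond = np.array([[(X[y == c, j].sum() + 1) / ((y == c).sum() + 2)
                  for j in range(d)] for c in range(classes)])

def predict(x):
    # Posterior log-probability of each class given the binary feature vector x.
    log_post = np.log(prior) + (x * np.log(cond) + (1 - x) * np.log(1 - cond)).sum(axis=1)
    return int(np.argmax(log_post))

accuracy = np.mean([predict(X[i]) == y[i] for i in range(n)])
print(f"training accuracy of the naive Bayes baseline: {accuracy:.3f}")
```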
76.
The Wisconsin Epidemiologic Study of Diabetic Retinopathy is a population-based epidemiological study carried out in southern Wisconsin during the 1980s. The resulting data were analysed by different statisticians and ophthalmologists during the last two decades. Most of the analyses were carried out on the baseline data, although there were two follow-up studies on the same population. A Bayesian analysis of the first follow-up data, taken four years after the baseline study, was carried out by Angers and Biswas [Angers, J.-F. and Biswas, A., 2004, A Bayesian analysis of the four-year follow-up data of the Wisconsin epidemiologic study of diabetic retinopathy. Statistics in Medicine, 23, 601–615], in which the best model in terms of covariate inclusion was chosen and estimates of the associated covariate effects were obtained, using the baseline data to set the prior for the parameters. In the present article we consider a univariate transformation of the bivariate ordinal data and carry out a parallel analysis with the much simpler univariate data. The results are then compared with those of Angers and Biswas (2004). In conclusion, our analyses suggest that the univariate analysis fails to detect features of the data found by the bivariate analysis. Even a univariate transformation of our data with quite high correlation with both left and right eyes is inadequate.
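A toy sketch of the point about univariate transformations: even a summary that correlates highly with both eyes' ordinal grades collapses the bivariate structure. The data below are simulated for illustration, not the Wisconsin study data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy bivariate ordinal data: retinopathy severity grades (0-4) for the right
# and left eyes, generated with strong between-eye dependence.
n = 500
latent = rng.normal(size=n)
right = np.clip(np.round(2 + latent + rng.normal(0, 0.5, n)), 0, 4)
left = np.clip(np.round(2 + latent + rng.normal(0, 0.5, n)), 0, 4)

# One candidate univariate transformation: the person-level mean grade.
univariate = (right + left) / 2.0

print("corr(univariate, right eye):", np.corrcoef(univariate, right)[0, 1].round(3))
print("corr(univariate, left eye): ", np.corrcoef(univariate, left)[0, 1].round(3))
# Even with these high correlations, the collapse discards the eye-specific
# structure that a bivariate ordinal analysis can exploit.
```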
77.
The main purpose of dose-escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigation in later studies. In this paper, we introduce dose-escalation designs that incorporate both dose-limiting toxicities (DLTs) and indicative responses of efficacy into the escalation procedure. A flexible nonparametric model is used for the continuous efficacy responses, while a logistic model is used for the binary DLTs. Escalation decisions are based on combining the probabilities of DLTs and the expected efficacy through a gain function. On the basis of this setup, we then introduce two types of Bayesian adaptive dose-escalation strategies. The first type, called "single objective," aims to identify and recommend a single dose: either the maximum tolerated dose (the highest dose that is considered safe) or the optimal dose (a safe dose that gives the optimum benefit-risk). The second type, called "dual objective," aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The doses recommended under these dose-escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate the different strategies via simulations based on an example constructed from a real trial on patients with type 2 diabetes, and we assess the use of stopping rules. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes, and that the dual-objective designs give better results in terms of identifying the two target doses than the single-objective designs.
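A hedged sketch of the escalation logic described above: a logistic toxicity curve supplies P(DLT) at each dose, an assumed efficacy profile stands in for the nonparametric model's posterior mean, and a simple gain function (expected efficacy discounted by toxicity risk) picks the optimal dose among doses under a safety cap. The specific gain function, parameter values, and the 0.33 cap are illustrative assumptions, not the paper's design.

```python
import numpy as np

doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

# Assumed toxicity model: logistic in log-dose (illustrative parameter values).
def p_dlt(dose, a=-3.0, b=1.1):
    x = a + b * np.log(dose)
    return 1.0 / (1.0 + np.exp(-x))

# Assumed posterior-mean efficacy at each dose (in the actual design this would
# come from the nonparametric regression model).
efficacy = np.array([0.10, 0.25, 0.45, 0.55, 0.58])

# One possible gain function: expected efficacy discounted by toxicity risk,
# restricted to doses whose estimated DLT probability is below a safety cap.
tox = p_dlt(doses)
gain = efficacy * (1.0 - tox)
admissible = tox < 0.33

optimal_dose = doses[admissible][np.argmax(gain[admissible])]
highest_safe_dose = doses[admissible].max()
print("P(DLT) by dose:", tox.round(3))
print("gain by dose:  ", gain.round(3))
print("recommended optimal dose:", optimal_dose,
      "| highest dose considered safe:", highest_safe_dose)
```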
78.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., from matched-pairs experiments or twin or family data), shared frailty models have been suggested. In this article, we introduce shared gamma frailty models with the reversed hazard rate. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true values of the parameters with the estimated values, and we apply the model to a real-life bivariate survival dataset.
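To show what the reversed-hazard frailty structure means, the sketch below simulates paired lifetimes from a shared gamma frailty acting multiplicatively on the reversed hazard, so that conditionally F(t | Z) = F0(t)^Z, with an assumed exponential baseline. It only generates data and checks the induced dependence; the MCMC estimation developed in the article is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Shared gamma frailty on the reversed hazard rate:
# if r(t | Z) = Z * r0(t), then F(t | Z) = F0(t) ** Z, so conditional on the
# shared frailty Z a lifetime can be drawn as F0^{-1}(U ** (1 / Z)).
k = 2.0          # gamma frailty shape (mean fixed at 1 via Gamma(k, rate=k))
lam = 0.5        # rate of the baseline exponential distribution F0

def baseline_inverse_cdf(u):
    return -np.log1p(-u) / lam          # inverse of F0(t) = 1 - exp(-lam * t)

n_pairs = 5000
Z = rng.gamma(shape=k, scale=1.0 / k, size=n_pairs)
U1, U2 = rng.uniform(size=(2, n_pairs))
T1 = baseline_inverse_cdf(U1 ** (1.0 / Z))
T2 = baseline_inverse_cdf(U2 ** (1.0 / Z))

# The shared frailty induces positive dependence between the paired lifetimes.
print("correlation between paired lifetimes:", np.corrcoef(T1, T2)[0, 1].round(3))
```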
79.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix it is shown that the number of captures in each sample typically provides little information about N. This suggests that, if there is no prior information about capture probabilities, then knowledge of just the sample sizes, and not the number of recaptures, should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.
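A small numerical illustration of the point about sample sizes: with uniform priors on the capture probabilities in a two-sample census, each sample size contributes only a 1/(N+1) factor to the marginal likelihood, and the recapture count carries the information about N. The prior p(N) ∝ 1/N used below is a common diffuse choice, not necessarily the invariance-motivated prior identified in the paper, and the counts are made up.

```python
import numpy as np
from scipy.stats import hypergeom

# Two-sample capture-recapture data: n1 marked, n2 in the second sample,
# m recaptured in both (illustrative values).
n1, n2, m = 60, 50, 12

# Grid of candidate population sizes.
N_grid = np.arange(max(n1 + n2 - m, 1), 5000)

# With independent uniform priors on the two capture probabilities, the marginal
# probability of each sample size is 1 / (N + 1) -- which is why the sample sizes
# alone carry little information about N -- while the number of recaptures m is
# hypergeometric given (N, n1, n2).
log_like = -2.0 * np.log(N_grid + 1.0) + hypergeom.logpmf(m, N_grid, n1, n2)

# Illustrative diffuse prior p(N) proportional to 1/N.
log_post = log_like - np.log(N_grid)
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_N = float(post @ N_grid)
sd_N = float(np.sqrt(post @ (N_grid - mean_N) ** 2))
print(f"posterior mean of N: {mean_N:.1f}, posterior sd: {sd_N:.1f}")
```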
80.
This paper reviews difficulties with the interpretation and use of the prior parameter u required in the Dirichlet approach to nonparametric Bayesian statistics. Two subjective prior distributions are introduced and studied. These priors are obtained computationally by requiring that the experimenter specify certain constraints.
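One aspect of the elicitation difficulty is the total mass of the Dirichlet prior parameter (denoted u above). The sketch below draws partitions from the corresponding Chinese restaurant process to show how strongly that single scalar controls the prior number of clusters; the values and names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def chinese_restaurant(n, total_mass):
    """Draw a partition of n items induced by a Dirichlet process prior with the
    given total mass (the scalar 'size' of the prior parameter)."""
    counts = []
    for _ in range(n):
        probs = np.array(counts + [total_mass], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)      # new cluster opened
        else:
            counts[table] += 1    # join an existing cluster
    return counts

# Larger total mass -> the prior dominates the data more and induces more clusters.
for mass in (0.5, 5.0, 50.0):
    n_clusters = [len(chinese_restaurant(100, mass)) for _ in range(50)]
    print(f"total mass {mass:5.1f}: mean number of clusters {np.mean(n_clusters):.1f}")
```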