41.
In recent years, dynamical modelling has gained a range of breakthrough methods for exact Bayesian inference. However, exact statistical methodology is often computationally infeasible for large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm simulates ‘subsamples’ from the assumed data-generating model and uses a so-called ‘early-rejection’ strategy to speed up computations in the ABC-MCMC sampler. Using a considerable number of subsamples does not seem to degrade the quality of the inferential results for the applications considered. A simulation study compares our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein dataset. The suggested methodology is fairly general and not limited to the exemplified model and data.
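The core ABC idea underlying the sampler above can be illustrated with a minimal rejection-ABC sketch. This is plain Python on a toy Gaussian model, not the paper's SDE or its ABC-MCMC sampler with early rejection; the function name, prior range, and tolerance are illustrative assumptions.

```python
import random
import statistics

def abc_rejection(observed_mean, n_obs, n_draws=2000, tol=0.1, seed=1):
    """Minimal ABC rejection sampler for the mean of a N(theta, 1) model.

    Draws theta from a flat prior on [-5, 5], simulates a dataset from the
    model, and keeps theta whenever the simulated summary statistic (the
    sample mean) falls within `tol` of the observed summary.
    """
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)                    # prior draw
        sim = [rng.gauss(theta, 1.0) for _ in range(n_obs)]
        if abs(statistics.fmean(sim) - observed_mean) < tol:
            accepted.append(theta)                        # summary close enough
    return accepted

posterior = abc_rejection(observed_mean=2.0, n_obs=50)
```

The accepted draws approximate the posterior of theta; shrinking `tol` trades acceptance rate for accuracy, which is the bottleneck the paper's early-rejection strategy targets.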
42.
We develop a novel computational methodology for Bayesian optimal sequential design for nonparametric regression. This methodology, which we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas from simulated annealing, genetic or evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of the methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic spline priors.
43.
Small area statistics obtained from sample survey data provide a critical source of information used to study health, economic, and sociological trends. However, most large-scale sample surveys are not designed for the purpose of producing small area statistics. Moreover, data disseminators are prevented from releasing public-use microdata for small geographic areas for disclosure reasons, thus limiting the utility of the data they collect. This research evaluates a synthetic data method, intended for data disseminators, for releasing public-use microdata for small geographic areas based on complex sample survey data. The method replaces all observed survey values with synthetic (or imputed) values generated from a hierarchical Bayesian model that explicitly accounts for complex sample design features, including stratification, clustering, and sampling weights. The method is applied to restricted microdata from the National Health Interview Survey, and synthetic data are generated for both sampled and non-sampled small areas. The analytic validity of the resulting small area inferences is assessed by direct comparison with the actual data, a simulation study, and a cross-validation study.
44.
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might have a differential treatment effect. Once the trial results are available, interest will focus on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting “random high” / selection bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation is that subgroup selection can be viewed as model selection, so that methods for dealing with model selection uncertainty, such as model averaging, apply in this setting as well. Simulations are used to evaluate the performance of the proposed approach. We illustrate it with an example from an early-phase clinical trial.
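The model-averaging step described above can be sketched in a few lines. This is a generic Bayesian model averaging computation assuming equal prior model probabilities and known log marginal likelihoods; the function name and numbers are illustrative, not from the paper.

```python
import math

def bma_estimate(estimates, log_marginals):
    """Combine per-model treatment-effect estimates using posterior model
    weights proportional to exp(log marginal likelihood), under equal
    prior model probabilities."""
    m = max(log_marginals)                         # stabilise the exponentials
    w = [math.exp(l - m) for l in log_marginals]
    total = sum(w)
    w = [x / total for x in w]
    # model-averaged estimate: weighted mean over the candidate models
    return sum(wi * ei for wi, ei in zip(w, estimates)), w

est, weights = bma_estimate([0.5, 1.2], [-10.0, -9.0])
```

Averaging over subgroup models in this way shrinks the selected subgroup's estimate toward competing models, which is how model averaging mitigates the "random high" bias.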
45.
This paper is concerned with interval estimation for the breakpoint parameter in segmented regression. We present score-type confidence intervals derived from the score statistic itself and from the recently proposed gradient statistic. Because the score lacks the usual regularity conditions, being non-smooth and non-monotone, naive application of the score-based statistics is infeasible, and we propose to exploit the smoothed score obtained via induced smoothing. We compare our proposals with the traditional methods based on the Wald and the likelihood ratio statistics via simulations and an analysis of a real dataset: results show that the smoothed score-like statistics perform in practice somewhat better than competitors, even when the model is not correctly specified.
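For intuition about the breakpoint parameter itself, here is a minimal grid-search point estimate for a segmented regression, fitting separate least-squares lines on each side of each candidate break (ignoring continuity at the break). This is an illustrative sketch only, not the paper's smoothed-score interval method; all names are assumptions.

```python
def _sse_line(pts):
    """SSE of a simple least-squares line fitted to (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in pts)

def breakpoint_grid(xs, ys, grid):
    """Pick the candidate breakpoint psi minimising the total SSE of
    separate lines fitted left (x <= psi) and right (x > psi) of it."""
    best_psi, best_sse = None, float("inf")
    for psi in grid:
        left = [(x, y) for x, y in zip(xs, ys) if x <= psi]
        right = [(x, y) for x, y in zip(xs, ys) if x > psi]
        if len(left) < 3 or len(right) < 3:   # need enough points per side
            continue
        sse = _sse_line(left) + _sse_line(right)
        if sse < best_sse:
            best_psi, best_sse = psi, sse
    return best_psi
```

The SSE profile over psi is piecewise and non-smooth in exactly the way that makes naive score-based inference for the breakpoint problematic.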
46.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix it is shown that the number of captures in each sample typically provides little information about N. This suggests that if there is no prior information about capture probabilities, then knowledge of just the sample sizes and not the number of recaptures should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.
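A minimal two-sample version of the posterior for N can be computed on a grid. This sketch assumes a flat prior on N up to a cap and a hypergeometric likelihood for the recapture count, which is not the particular prior model the abstract identifies; function and parameter names are illustrative.

```python
import math

def posterior_N(n1, n2, m, N_max=2000):
    """Grid posterior for population size N in a two-sample
    capture-recapture study: n1 first captures, n2 second captures,
    m recaptures.  Flat prior on N; P(m | N) is hypergeometric,
    C(n1, m) * C(N - n1, n2 - m) / C(N, n2)  (constants in N dropped)."""
    logpost = {}
    for N in range(max(n1 + n2 - m, 1), N_max + 1):
        lp = (math.lgamma(N - n1 + 1) - math.lgamma(N - n1 - n2 + m + 1)
              - math.lgamma(N + 1) + math.lgamma(N - n2 + 1))
        logpost[N] = lp
    mx = max(logpost.values())                     # stabilise exponentials
    w = {N: math.exp(l - mx) for N, l in logpost.items()}
    s = sum(w.values())
    return {N: v / s for N, v in w.items()}

post = posterior_N(100, 100, 20, N_max=2000)
```

With a flat prior the posterior mode sits near the classical Lincoln-Petersen estimate n1*n2/m, here 500.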
47.
In a recent paper, Hampel (1985) studied the properties of rejection-plus-mean procedures as estimators of a location parameter. He reported that these procedures have low breakdown and high variance. In this article it is pointed out that these results are due to the outliers being rejected in a forwards-stepping manner; when a more appropriate backwards-stepping approach is used, rejection-plus-mean procedures lead to estimators with high breakdown and a redescending theoretical influence function.
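The rejection-plus-mean idea can be illustrated with a simple one-shot robust rule: flag points far from the median on a MAD scale, then average the rest. This is an illustrative variant, not Hampel's forwards- or the article's backwards-stepping procedure; the cutoff constant is an assumption.

```python
import statistics

def rejection_plus_mean(xs, c=3.0):
    """Flag observations further than c robust scales from the median
    (MAD rescaled to be consistent for the normal), then return the
    mean of the surviving observations."""
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs) or 1e-12
    kept = [x for x in xs if abs(x - med) <= c * 1.4826 * mad]
    return statistics.fmean(kept)
```

Because the cutoff is anchored at the median rather than at previously accepted points, a single gross outlier cannot drag the rejection region toward itself, which is the intuition behind the high-breakdown behaviour claimed above.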
48.
This paper reviews difficulties with the interpretation and use of the prior parameter u required in the Dirichlet approach to nonparametric Bayesian statistics. Two subjective prior distributions are introduced and studied. These priors are obtained computationally by requiring that the experimenter specify certain constraints.
49.
In the fields of internet financial transactions and reliability engineering, excess zero and one observations can occur simultaneously. Since such data lie beyond what conventional models can fit, a zero-and-one-inflated geometric regression model is proposed. By introducing Pólya-Gamma latent variables into the Bayesian inference, posterior sampling of the high-dimensional parameters is decomposed into sampling the latent variables and sampling the lower-dimensional parameters, respectively. This circumvents the need for Metropolis-Hastings sampling and yields samples with higher sampling efficiency. A simulation study is conducted to assess the performance of the proposed estimator for various sample sizes. Finally, a doctoral dissertation data set is analyzed to illustrate the practicability of the proposed method; the results show that the zero-and-one-inflated geometric regression model with Pólya-Gamma latent variables achieves a better fit.
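The zero-and-one-inflated geometric distribution itself is easy to simulate as a three-component mixture. This sketch assumes the common parameterisation (structural-zero probability p0, structural-one probability p1, geometric success probability theta on support {0, 1, 2, ...}); it is not the paper's regression model or its Pólya-Gamma sampler.

```python
import random

def sample_zoig(p0, p1, theta, n, seed=7):
    """Draw n values from a zero-and-one-inflated geometric distribution:
    with prob p0 a structural 0, with prob p1 a structural 1, otherwise
    a Geometric(theta) count (number of failures before first success)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()
        if u < p0:
            out.append(0)
        elif u < p0 + p1:
            out.append(1)
        else:
            k = 0
            while rng.random() > theta:   # keep failing with prob 1 - theta
                k += 1
            out.append(k)
    return out
```

The inflation shows up directly in the marginal frequencies: P(0) = p0 + (1 - p0 - p1) * theta and P(1) = p1 + (1 - p0 - p1) * theta * (1 - theta), which exceed what a plain geometric model can produce simultaneously.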
50.
Considering supply and demand uncertainty in the classical newsvendor model, this paper studies the inventory optimization problem of a risk-averse retailer. Inventory performance is measured by conditional value-at-risk (CVaR), and a CVaR-based retailer inventory model is constructed. Building on this, the uncertainty in the upstream supplier's delivery capacity and in downstream market demand is described by a set of discrete scenarios with unknown probabilities, yielding a CVaR-based robust inventory optimization model under supply and demand uncertainty. Further, the unknown scenario probabilities are modelled with an interval uncertainty set, and a robust counterpart based on the max-min criterion is given. To handle the non-convexity caused by considering supply and demand uncertainty simultaneously, standard duality theory is used to convert the model into a tractable mathematical program. Finally, numerical experiments analyse how different degrees of risk aversion and uncertainty affect the retailer's inventory decisions and performance. The results show that although supply and demand uncertainty causes some loss of inventory performance, the loss is small. In particular, the robust inventory policy obtained from the proposed model guarantees better inventory performance for the retailer in most cases. Moreover, although greater uncertainty and risk aversion affect the retailer's decisions and operational performance, under the same risk attitude the robust policy obtained with the proposed method still secures satisfactory inventory performance as uncertainty grows, indicating that the model is robust against supply and demand uncertainty.
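The scenario-based CVaR measure used above can be computed directly for a discrete loss distribution, and a newsvendor order quantity evaluated against it. This is a minimal sketch with illustrative prices and scenarios, not the paper's robust max-min model.

```python
def cvar(losses, probs, alpha=0.95):
    """CVaR_alpha of a discrete loss distribution: the expected loss in
    the worst (1 - alpha) probability tail."""
    pairs = sorted(zip(losses, probs), key=lambda t: t[0], reverse=True)
    tail = 1.0 - alpha
    acc, used = 0.0, 0.0
    for loss, p in pairs:                  # walk down from the worst losses
        take = min(p, tail - used)
        acc += take * loss
        used += take
        if used >= tail - 1e-12:
            break
    return acc / tail

def newsvendor_loss(q, demand, price=10.0, cost=6.0):
    """Negative profit for order quantity q under one demand scenario."""
    return -(price * min(q, demand) - cost * q)
```

A risk-averse order quantity is then the q minimising `cvar([newsvendor_loss(q, d) for d in demands], probs)`; the robust model in the abstract additionally takes the worst case over an interval set of scenario probabilities.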