2087 results found (search time: 15 ms)
53.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation of a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose updating the meta-analytic-predictive prior with the results of the internal pilot study and re-estimating the sample size using an estimator from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re-estimation approach, which uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics relative to the traditional approach. In the case of a prior-data conflict, that is, when the variance in the ongoing clinical trial differs from the prior location, the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When deciding whether to include prior information in sample size re-estimation, the potential gains should therefore be balanced against the risks.
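As a rough, hypothetical illustration of variance-based sample size re-estimation (the function names and the simple precision-weighted blend below are our own stand-ins, not the paper's meta-analytic-predictive update):

```python
import math
from statistics import NormalDist

def reestimated_n(var_est: float, delta: float,
                  alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm sample size for a two-arm trial with a continuous endpoint,
    recomputed from an interim estimate of the outcome variance."""
    z = NormalDist().inv_cdf
    n = 2.0 * (z(1 - alpha / 2) + z(power)) ** 2 * var_est / delta ** 2
    return math.ceil(n)

def blended_var(prior_var: float, prior_weight: float,
                pilot_var: float, pilot_df: float) -> float:
    """Crude precision-style blend of an external variance guess with the
    internal pilot estimate; a stand-in for the full posterior update."""
    return (prior_weight * prior_var + pilot_df * pilot_var) / (prior_weight + pilot_df)
```

With `var_est = 1.0` and a clinically relevant difference `delta = 0.5`, `reestimated_n` returns the familiar 63 patients per arm at 80% power.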
54.
William Jay Conover Armando Jesús Guerrero-Serrano 《Journal of Statistical Computation and Simulation》2018,88(8):1454-1469
Tests for equality of variances using independent samples are widely used in data analysis. Conover et al. [A comparative study of tests for homogeneity of variance, with applications to the outer continental shelf bidding data. Technometrics. 1981;23:351–361] won the Youden Prize for comparing 56 variations of popular tests for variance on the basis of robustness and power in 60 different scenarios. None of the tests they compared were both robust and powerful for the skewed distributions they considered. This study examines 12 variations they did not consider and shows that 10 are robust for those skewed distributions plus the lognormal distribution, which they did not study. Three of the 12 have clearly superior power for skewed distributions and are competitive in terms of robustness and power for all of the distributions considered; they are recommended for general use on the grounds of robustness, power, and ease of application.
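For readers who want to experiment, here is a minimal pure-Python median-centered Levene-type (Brown-Forsythe) statistic, one of the robust variants typically compared in this literature; this sketch is our own and is not one of the 12 variations studied in the paper:

```python
from statistics import mean, median

def brown_forsythe_stat(*samples):
    """One-way ANOVA F statistic applied to |x - group median|.
    Large values suggest unequal variances; compare against an
    F(k - 1, N - k) reference distribution."""
    k = len(samples)
    z = [[abs(x - median(s)) for x in s] for s in samples]
    big_n = sum(len(s) for s in samples)
    grand = mean([v for grp in z for v in grp])
    means = [mean(grp) for grp in z]
    between = sum(len(grp) * (m - grand) ** 2
                  for grp, m in zip(z, means)) / (k - 1)
    within = sum((v - m) ** 2
                 for grp, m in zip(z, means) for v in grp) / (big_n - k)
    return between / within
```

Identical groups give a statistic of 0, while groups with very different spreads drive it well above 1.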
55.
Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation
《Journal of Statistical Computation and Simulation》2012,82(1):195-213
In recent years, dynamical modelling has been provided with a range of breakthrough methods for performing exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies to large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein-folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm uses simulations of ‘subsamples’ from the assumed data-generating model, together with a so-called ‘early-rejection’ strategy, to speed up computations in the ABC-MCMC sampler. Using a considerable number of subsamples does not seem to degrade the quality of the inferential results for the applications considered. A simulation study compares our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein dataset. The suggested methodology is fairly general and not limited to the exemplified model and data.
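The core ABC rejection idea can be sketched in a few lines. The `subsample` shortcut below only gestures at the paper's subsampling and early-rejection devices, all names are illustrative, and a real ABC-MCMC sampler would embed the accept step in a Metropolis chain:

```python
import random

def abc_rejection(observed, simulate, distance, prior_sample,
                  eps, n_draws=5000, subsample=100):
    """Plain ABC rejection: draw a parameter from the prior, simulate
    data, and keep the draw if the simulated data lie within eps of a
    random subsample of the observations."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        obs_sub = random.sample(observed, min(subsample, len(observed)))
        sim = simulate(theta, len(obs_sub))
        if distance(obs_sub, sim) < eps:
            accepted.append(theta)
    return accepted

# Toy example: recover the mean of a Gaussian from a flat prior.
random.seed(1)
obs = [random.gauss(3.0, 1.0) for _ in range(1000)]
post = abc_rejection(
    obs,
    simulate=lambda t, n: [random.gauss(t, 1.0) for _ in range(n)],
    distance=lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b)),
    prior_sample=lambda: random.uniform(-10, 10),
    eps=0.2)
```

The accepted draws in `post` form an approximate posterior sample concentrated near the true mean of 3.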
56.
《Journal of Statistical Computation and Simulation》2012,82(3-4):259-267
The robustness of an extended version of Colton's decision-theoretic model is considered. The extended version includes the losses due to patients who are not entered in the experiment but require treatment while the experiment is in progress. Among the topics considered are the effect on risk of using a sample size considerably smaller than the optimum, the use of an incorrect patient horizon, the application of a modified loss function, and the use of a two-point prior distribution. It is shown that the investigated model is robust with respect to all of these changes with the exception of the use of the modified prior density.
57.
We develop a novel computational methodology for Bayesian optimal sequential design in nonparametric regression. This methodology, which we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas from simulated annealing, genetic or evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of the methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic spline priors.
58.
Jelani Wiltshire Fred W. Huffer William C. Parker 《Journal of applied statistics》2014,41(9):2028-2043
Van Valen's Red Queen hypothesis states that within a homogeneous taxonomic group the age of a taxon is statistically independent of its rate of extinction. The case of the Red Queen hypothesis addressed here is the one in which the homogeneous taxonomic group is a group of similar species. Since Van Valen's work, various statistical approaches have been used to address the relationship between taxon age and the rate of extinction. We propose a general class of test statistics that can be used to test for the effect of age on the rate of extinction. These test statistics allow for a varying background rate of extinction and attempt to remove the effects of other covariates when assessing the effect of age on extinction. No model is assumed for the covariate effects; instead, we control for covariate effects by pairing or grouping together similar species. Simulations are used to compare the power of the statistics. We apply the test statistics to data on Foram extinctions and find that age has a positive effect on the rate of extinction. A derivation of the null distribution of one of the test statistics is provided in the supplementary material.
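A toy version of the pairing idea (entirely our own construction, not the authors' test statistics): within each matched pair of similar species, record whether the older member went extinct first. Under the null that age is irrelevant this is a fair coin, which yields a simple sign-test z statistic:

```python
import math
from statistics import NormalDist

def sign_test_z(older_first_flags):
    """older_first_flags: for each matched pair of similar species,
    True if the older member went extinct first.  Under the Red Queen
    null each flag is Bernoulli(1/2); returns the normal-approximation
    z statistic of the sign test."""
    n = len(older_first_flags)
    k = sum(older_first_flags)
    return (k - n / 2) / math.sqrt(n / 4)

def one_sided_p(z):
    """One-sided p-value for 'age increases extinction risk'."""
    return 1 - NormalDist().cdf(z)
```

For example, 60 of 100 pairs in which the older species died first gives z = 2.0 and a one-sided p-value of about 0.023.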
59.
Small area statistics obtained from sample survey data provide a critical source of information used to study health, economic, and sociological trends. However, most large-scale sample surveys are not designed for the purpose of producing small area statistics. Moreover, data disseminators are prevented from releasing public-use microdata for small geographic areas for disclosure reasons, thus limiting the utility of the data they collect. This research evaluates a synthetic data method, intended for data disseminators, for releasing public-use microdata for small geographic areas based on complex sample survey data. The method replaces all observed survey values with synthetic (or imputed) values generated from a hierarchical Bayesian model that explicitly accounts for complex sample design features, including stratification, clustering, and sampling weights. The method is applied to restricted microdata from the National Health Interview Survey, and synthetic data are generated for both sampled and non-sampled small areas. The analytic validity of the resulting small area inferences is assessed by direct comparison with the actual data, a simulation study, and a cross-validation study.
60.
Björn Bornkamp David Ohlssen Baldur P. Magnusson Heinz Schmidli 《Pharmaceutical statistics》2017,16(2):133-142
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might show a differential treatment effect. Once the trial results are available, interest focuses on subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting “random high”, or selection, bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation for model averaging is the realization that subgroup selection can be viewed as model selection, so that methods for dealing with model-selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach, and we illustrate it with an example from an early-phase clinical trial.
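One common concrete realization of model averaging, sketched here under our own assumptions rather than the paper's specific Bayesian machinery, weights each candidate subgroup model's effect estimate by an approximate posterior model probability derived from BIC:

```python
import math

def bic_weights(bics):
    """Convert per-model BIC values into approximate posterior model
    probabilities via exp(-BIC/2), normalized to sum to one."""
    m = min(bics)  # subtract the minimum for numerical stability
    w = [math.exp(-(b - m) / 2) for b in bics]
    s = sum(w)
    return [x / s for x in w]

def averaged_effect(effects, bics):
    """Model-averaged treatment effect: each candidate subgroup model's
    estimate weighted by its approximate posterior probability."""
    return sum(w * e for w, e in zip(bic_weights(bics), effects))
```

Two equally supported models with effect estimates 1.0 and 3.0 average to 2.0; as one model's BIC worsens, its estimate is shrunk toward the better-supported models, which is the mechanism that tempers the "random high" of naively reporting the best subgroup.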