Full-text access type
Paid full text | 834 articles
Free | 12 articles
Free (domestic) | 1 article
Subject classification
Management | 24 articles
Demography | 3 articles
Collected works | 3 articles
Theory and methodology | 2 articles
General | 29 articles
Sociology | 2 articles
Statistics | 784 articles
Publication year
2023 | 6 articles
2022 | 6 articles
2021 | 4 articles
2020 | 15 articles
2019 | 23 articles
2018 | 21 articles
2017 | 58 articles
2016 | 12 articles
2015 | 15 articles
2014 | 19 articles
2013 | 317 articles
2012 | 81 articles
2011 | 12 articles
2010 | 14 articles
2009 | 29 articles
2008 | 19 articles
2007 | 24 articles
2006 | 10 articles
2005 | 19 articles
2004 | 14 articles
2003 | 5 articles
2002 | 14 articles
2001 | 10 articles
2000 | 10 articles
1999 | 7 articles
1998 | 8 articles
1997 | 7 articles
1996 | 3 articles
1995 | 5 articles
1994 | 7 articles
1993 | 3 articles
1992 | 5 articles
1991 | 1 article
1990 | 6 articles
1989 | 11 articles
1988 | 2 articles
1987 | 2 articles
1986 | 4 articles
1985 | 2 articles
1984 | 1 article
1983 | 6 articles
1982 | 2 articles
1981 | 1 article
1980 | 1 article
1979 | 1 article
1978 | 3 articles
1976 | 1 article
1975 | 1 article
847 query results in total
62.
Predictive Inference for Big, Spatial, Non-Gaussian Data: MODIS Cloud Data and its Change-of-Support
Aritra Sengupta, Noel Cressie, Brian H. Kahn, Richard Frey. Australian & New Zealand Journal of Statistics, 2016, 58(1): 15-45
Remote sensing of the earth with satellites yields datasets that can be massive in size, nonstationary in space, and non-Gaussian in distribution. To overcome computational challenges, we use the reduced-rank spatial random effects (SRE) model in a statistical analysis of cloud-mask data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board the Terra satellite. Parameterisations of cloud processes are the biggest source of uncertainty and sensitivity in different climate models' future projections of Earth's climate. An accurate quantification of the spatial distribution of clouds, as well as a rigorously estimated pixel-scale clear-sky-probability process, is needed to establish reliable estimates of cloud-distributional changes and trends caused by climate change. Here we give a hierarchical spatial-statistical modelling approach for a very large spatial dataset of 2.75 million pixels, corresponding to a granule of MODIS cloud-mask data, and we use spatial change-of-support relationships to estimate cloud fraction at coarser resolutions. Our model is non-Gaussian; it postulates a hidden process for the clear-sky probability that makes use of the SRE model, EM estimation, and optimal (empirical Bayes) spatial prediction of the clear-sky-probability process. Measures of prediction uncertainty are also given.
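The computational appeal of the reduced-rank SRE model is that kriging only ever requires solving r × r systems, where r is the small number of basis functions, rather than inverting an n × n covariance matrix. The sketch below is a minimal, hypothetical illustration of that idea for a Gaussian response on a 1-D domain with bisquare basis functions; it is not the authors' non-Gaussian hierarchical model, which additionally involves a hidden clear-sky-probability process, EM estimation, and change-of-support aggregation.

```python
import numpy as np

# Reduced-rank spatial random effects sketch (Gaussian case only).
# Model: Z = S @ eta + eps,  eta ~ N(0, K),  eps ~ N(0, sig2 * I).

def bisquare(s, centers, radius):
    """n x r matrix of bisquare basis functions."""
    d = np.abs(s[:, None] - centers[None, :]) / radius
    return np.where(d < 1, (1 - d**2) ** 2, 0.0)

rng = np.random.default_rng(0)
n, r = 5000, 25                      # many data points, few basis functions
s = np.sort(rng.uniform(0, 1, n))    # observation locations
centers = np.linspace(0, 1, r)
S = bisquare(s, centers, radius=0.1)

K = np.eye(r)                        # random-effects covariance (assumed known here)
sig2 = 0.25
eta = rng.multivariate_normal(np.zeros(r), K)
Z = S @ eta + rng.normal(0, np.sqrt(sig2), n)

# Posterior mean of eta via the Sherman-Morrison-Woodbury identity:
#   E[eta | Z] = K S' (S K S' + sig2 I)^{-1} Z
#              = K S' [Z - S (sig2 K^{-1} + S'S)^{-1} S'Z] / sig2
# Only r x r systems are solved, never n x n.
A = sig2 * np.linalg.inv(K) + S.T @ S          # r x r
eta_hat = K @ (S.T @ (Z - S @ np.linalg.solve(A, S.T @ Z))) / sig2

s0 = np.linspace(0, 1, 200)                    # prediction locations
Y_hat = bisquare(s0, centers, radius=0.1) @ eta_hat
print(Y_hat[:5])
```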
63.
In this paper, we develop Bayes factor based testing procedures for the presence of a correlation or a partial correlation. The proposed Bayesian tests are obtained by restricting the class of alternative hypotheses so as to maximize the probability of rejecting the null hypothesis when the Bayes factor exceeds a specified threshold. It turns out that they depend simply on the frequentist t-statistics and their associated critical values, and can thus be calculated easily, for example in an Excel spreadsheet, by adding just one more step after performing the frequentist correlation tests. In addition, they yield decisions identical to those of the frequentist paradigm, provided that the evidence threshold of the Bayesian tests is determined by the significance level of the frequentist tests. We illustrate the performance of the proposed procedures through simulated and real-data examples.
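The paper's thresholding rule is not reproduced here, but the frequentist ingredient it builds on is standard: the correlation coefficient r maps to the t-statistic t = r√((n−2)/(1−r²)) with n−2 degrees of freedom. A minimal sketch of that step, which per the abstract is the quantity one would carry into the Bayesian decision:

```python
import numpy as np
from scipy import stats

def correlation_t_test(x, y):
    """Pearson correlation with its t-statistic and two-sided p-value.

    The Bayes tests described in the abstract are functions of this
    t-statistic; the Bayesian thresholding step itself is paper-specific
    and not implemented here.
    """
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r**2))
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return r, t, p

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.4 * x + rng.normal(size=50)
print(correlation_t_test(x, y))
```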
64.
In this paper, we consider the problem of making statistical inference for a truncated normal distribution under progressive type-I interval censoring. We obtain maximum likelihood estimators of the unknown parameters using the expectation-maximization algorithm and subsequently compute the corresponding midpoint estimates of the parameters. Estimation based on the probability-plot method is also considered. Asymptotic confidence intervals of the unknown parameters are constructed from the observed Fisher information matrix. We obtain Bayes estimators of the parameters with respect to informative and non-informative prior distributions under the squared error and LINEX loss functions, computing these estimates using the importance sampling procedure. The highest posterior density intervals of the unknown parameters are constructed as well. We present a Monte Carlo simulation study to compare the performance of the proposed point and interval estimators, and a real data set is analyzed for illustration. Finally, inspection times and optimal censoring plans based on the expected Fisher information matrix are discussed.
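To make the censoring scheme concrete, here is a minimal numerical sketch of the likelihood in question. Assume a normal distribution left-truncated at a known point a, inspection times t_1 < … < t_k, d_j failures recorded in (t_{j−1}, t_j], and R_j units progressively withdrawn (right-censored) at t_j; the data below are hypothetical. This is direct numerical maximization of the standard log-likelihood for this scheme, not the paper's EM algorithm, and the Bayesian and optimal-plan calculations are not reproduced.

```python
import numpy as np
from scipy import stats, optimize

a = 0.0                                   # known left-truncation point
t = np.array([1.0, 2.0, 3.0, 4.0])        # inspection times (hypothetical)
d = np.array([14, 22, 18, 9])             # failures observed in each interval
R = np.array([2, 3, 2, 10])               # units withdrawn at each inspection

def cdf_trunc(x, mu, sigma):
    """CDF of a normal(mu, sigma) left-truncated at a."""
    Fa = stats.norm.cdf(a, mu, sigma)
    return (stats.norm.cdf(x, mu, sigma) - Fa) / (1.0 - Fa)

def negloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)             # keep sigma positive
    F = cdf_trunc(t, mu, sigma)
    Fprev = np.concatenate(([0.0], F[:-1]))
    # interval-failure contributions + right-censored contributions
    ll = np.sum(d * np.log(F - Fprev)) + np.sum(R * np.log(1.0 - F))
    return -ll

res = optimize.minimize(negloglik, x0=[2.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)
```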
65.
This paper addresses the problems of frequentist and Bayesian estimation of the unknown parameters of the generalized Lindley distribution based on lower record values. We first derive exact explicit expressions for the single and product moments of lower record values, and then use these results to compute the means, variances, and covariances between two lower record values. We next obtain the maximum likelihood estimators and associated asymptotic confidence intervals. Furthermore, we obtain Bayes estimators under the assumption of gamma priors on both the shape and scale parameters of the generalized Lindley distribution, along with the associated highest posterior density interval estimates. The Bayesian estimation is studied with respect to both symmetric (squared error) and asymmetric (linear-exponential, LINEX) loss functions. Finally, we compute Bayesian predictive estimates and predictive interval estimates for future record values. To illustrate the findings, one real data set is analyzed, and Monte Carlo simulations are performed to compare the performance of the proposed methods of estimation and prediction.
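For readers unfamiliar with the data structure: lower record values are the successive new minima of a sequence of observations. The sketch below extracts them from a simulated sample; it uses an exponential sample purely as a stand-in (the several parametrizations of the generalized Lindley distribution in the literature differ, and the paper's estimators are not reproduced here).

```python
import numpy as np

def lower_records(x):
    """Return the lower record values of a sequence: x[0] and every
    subsequent observation strictly smaller than all before it."""
    records = []
    current_min = np.inf
    for v in x:
        if v < current_min:
            records.append(v)
            current_min = v
    return np.array(records)

rng = np.random.default_rng(2)
sample = rng.exponential(scale=1.0, size=200)   # stand-in distribution
print(lower_records(sample))
```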
66.
Daniel J. Benjamin. The American Statistician, 2019, 73(1): 186-191
Researchers commonly use p-values to answer the question: how strongly does the evidence favor the alternative hypothesis relative to the null hypothesis? p-Values themselves do not directly answer this question and are often misinterpreted in ways that lead to overstating the evidence against the null hypothesis. Even in the “post p < 0.05 era,” however, it is quite possible that p-values will continue to be widely reported and used to assess the strength of evidence (if for no other reason than the widespread availability and use of statistical software that routinely produces p-values and thereby implicitly advocates for their use). If so, the potential for misinterpretation will persist. In this article, we recommend three practices that would help researchers more accurately interpret p-values. Each of the three recommended practices involves interpreting p-values in light of their corresponding “Bayes factor bound,” which is the largest odds in favor of the alternative hypothesis relative to the null hypothesis that is consistent with the observed data. The Bayes factor bound generally indicates that a given p-value provides weaker evidence against the null hypothesis than typically assumed. We therefore believe that our recommendations can guard against some of the most harmful p-value misinterpretations. In research communities that are deeply attached to reliance on “p < 0.05,” our recommendations will serve as initial steps away from this attachment. We emphasize that our recommendations are intended merely as initial, temporary steps and that many further steps will need to be taken to reach the ultimate destination: a holistic interpretation of statistical evidence that fully conforms to the principles laid out in the ASA statement on statistical significance and p-values.
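The Bayes factor bound the article builds on has a simple closed form due to Sellke, Bayarri, and Berger: for p < 1/e, the odds in favor of the alternative implied by a p-value can be at most 1/(−e·p·ln p). A minimal implementation:

```python
import math

def bayes_factor_bound(p):
    """Upper bound on the odds for H1 over H0 implied by a p-value:
    1 / (-e * p * ln p) for p < 1/e, else 1 (Sellke-Bayarri-Berger)."""
    if not 0 < p < 1:
        raise ValueError("p must be in (0, 1)")
    if p >= 1 / math.e:
        return 1.0
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.01, 0.005, 0.001):
    print(f"p = {p}: Bayes factor bound ~ {bayes_factor_bound(p):.1f}")
```

At p = 0.05 the bound is only about 2.5, i.e. at most 2.5-to-1 odds against the null, which is the sense in which a given p-value provides weaker evidence than typically assumed.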
67.
Mark A. van de Wiel, Dennis E. Te Beest, Magnus M. Münch. Scandinavian Journal of Statistics, 2019, 46(1): 2-25
Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the marginal likelihood but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross-validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed “co-data”. In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
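To make the “formal” empirical Bayes idea concrete, here is the textbook normal-means example (a generic illustration, not the authors' co-data models): with y_i | θ_i ~ N(θ_i, 1) and prior θ_i ~ N(0, τ²), the marginal is y_i ~ N(0, 1 + τ²), so τ² can be estimated by maximizing the marginal likelihood and then plugged into the posterior-mean shrinkage rule.

```python
import numpy as np

# Empirical Bayes in the normal means model:
#   y_i | theta_i ~ N(theta_i, 1),  theta_i ~ N(0, tau2)
# Marginally y_i ~ N(0, 1 + tau2), so the marginal-likelihood MLE of
# tau2 is max(0, mean(y_i^2) - 1); the plug-in posterior mean of
# theta_i is the shrinkage estimate tau2 / (1 + tau2) * y_i.

rng = np.random.default_rng(3)
p, tau2_true = 2000, 2.0
theta = rng.normal(0, np.sqrt(tau2_true), p)
y = theta + rng.normal(0, 1, p)

tau2_hat = max(0.0, np.mean(y**2) - 1.0)       # marginal-likelihood MLE
theta_hat = tau2_hat / (1.0 + tau2_hat) * y    # plug-in posterior means

mse_raw = np.mean((y - theta) ** 2)
mse_eb = np.mean((theta_hat - theta) ** 2)
print(f"tau2_hat = {tau2_hat:.2f}, MSE raw = {mse_raw:.2f}, MSE EB = {mse_eb:.2f}")
```

Averaging over many variables to estimate a shared prior parameter is exactly the “learning from a lot” that makes the shrinkage estimator beat the raw observations in mean squared error.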
69.
Ying-Ying Zhang. Communications in Statistics - Theory and Methods, 2017, 46(14): 7125-7133
For the variance parameter of the hierarchical normal and inverse gamma model, we analytically calculate the Bayes rule (estimator) with respect to an IG(α, β) prior distribution under Stein's loss function. This estimator minimizes the posterior expected Stein's loss (PESL). We also analytically calculate the Bayes rule and the PESL under the squared error loss. Finally, numerical simulations show that the PESLs depend only on α and the number of observations, and that the Bayes rules and PESLs under Stein's loss are uniformly smaller than those under the squared error loss.
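A short derivation clarifies the comparison, under the standard reading of this model (x_i | σ² iid N(μ, σ²) with μ known, σ² ~ IG(α, β)): the posterior is IG(α + n/2, β + S/2) with S = Σ(x_i − μ)²; Stein's loss L(σ², δ) = δ/σ² − ln(δ/σ²) − 1 is minimized at δ = 1/E[1/σ² | x], while the squared-error rule is the posterior mean, giving (β + S/2)/(α + n/2) and (β + S/2)/(α + n/2 − 1) respectively, so the Stein rule is always the smaller. The sketch below (a hypothetical check, not the paper's code) verifies by Monte Carlo that the Stein rule attains the smaller posterior expected Stein's loss.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, mu, n = 3.0, 2.0, 0.0, 20
x = rng.normal(mu, 1.5, n)
S = np.sum((x - mu) ** 2)

# Posterior of sigma^2 is IG(alpha + n/2, beta + S/2).
a_post, b_post = alpha + n / 2, beta + S / 2

delta_stein = b_post / a_post          # 1 / E[1/sigma^2 | x]
delta_se = b_post / (a_post - 1)       # posterior mean E[sigma^2 | x]

# Monte Carlo posterior expected Stein's loss:
#   PESL(delta) = E[ delta/sigma^2 - ln(delta/sigma^2) - 1 | x ]
# If sigma^2 ~ IG(a, b), then 1/sigma^2 ~ Gamma(shape=a, scale=1/b).
sig2 = 1.0 / rng.gamma(shape=a_post, scale=1.0 / b_post, size=200_000)

def pesl(delta):
    r = delta / sig2
    return np.mean(r - np.log(r) - 1.0)

print(delta_stein, delta_se)
print(pesl(delta_stein), pesl(delta_se))   # Stein rule gives the smaller PESL
```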
70.
Bernhard Rieder. Information, Communication & Society, 2017, 20(1): 100-117
This paper outlines the notion of ‘algorithmic technique’ as a middle ground between concrete, implemented algorithms and the broader study and theorization of software. Algorithmic techniques specify principles and methods for doing things in the medium of software, and they thus constitute units of knowledge and expertise in the domain of software making. I suggest that algorithmic techniques are a suitable object of study for the humanities and social sciences, since they capture the central technical principles behind actual software but can generally be described in accessible language. To make my case, I focus on the field of information ordering and, first, discuss the wider historical trajectory of formal or ‘mechanical’ reasoning applied to matters of commerce and government before, second, moving to the investigation of a particular algorithmic technique, the Bayes classifier. This technique is explicated through a reading of the original work of M. E. Maron in the early 1960s and presented as a means to subject empirical, ‘datafied’ reality to an interested reading that confers meaning on each variable in relation to an operational goal. After a discussion of the Bayes classifier in relation to the question of power, the paper concludes by coming back to its initial motive and argues for increased attention to algorithmic techniques in the study of software.
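For readers who want to see the technique in concrete form, here is a minimal Bernoulli naive Bayes classifier for documents represented as sets of terms — the ‘interested reading’ in which each variable (term) contributes a weight toward an operational goal (the category). This is a generic textbook sketch with made-up example data, not Maron's original 1961 implementation.

```python
import math
from collections import defaultdict

class BernoulliNaiveBayes:
    """Minimal naive Bayes for documents represented as sets of terms."""

    def fit(self, docs, labels):
        self.vocab = set(t for d in docs for t in d)
        self.classes = set(labels)
        n_c = defaultdict(int)                 # documents per class
        n_ct = defaultdict(int)                # (class, term) document counts
        for d, c in zip(docs, labels):
            n_c[c] += 1
            for t in set(d):
                n_ct[(c, t)] += 1
        self.log_prior = {c: math.log(n_c[c] / len(docs)) for c in self.classes}
        # Laplace-smoothed log P(term present | class)
        self.log_p = {
            (c, t): math.log((n_ct[(c, t)] + 1) / (n_c[c] + 2))
            for c in self.classes for t in self.vocab
        }
        return self

    def predict(self, doc):
        terms = set(doc) & self.vocab
        scores = {}
        for c in self.classes:
            s = self.log_prior[c]
            for t in self.vocab:
                p = self.log_p[(c, t)]
                # add log P(present) or log P(absent) for each vocab term
                s += p if t in terms else math.log1p(-math.exp(p))
            scores[c] = s
        return max(scores, key=scores.get)

docs = [["bayes", "prior", "posterior"], ["index", "retrieval", "query"],
        ["posterior", "likelihood"], ["query", "document", "index"]]
labels = ["statistics", "retrieval", "statistics", "retrieval"]
clf = BernoulliNaiveBayes().fit(docs, labels)
print(clf.predict(["bayes", "likelihood"]))   # -> "statistics"
```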