21.
John Whitehead, Susan Todd & W. J. Hall 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(4):731-745
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.
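A schematic statement of the key assumption, in notation of our own rather than the authors': if ℐ1(τ) and ℐ2(τ) denote the Fisher information accrued on the primary and secondary parameters by interim time τ, the method requires

$$\mathcal{I}_2(\tau) = \rho\,\mathcal{I}_1(\tau) \quad \text{for every interim analysis time } \tau,$$

for some fixed ρ > 0. Since in survival analysis the information on a treatment effect grows with the number of observed events for that endpoint, exact proportionality across two endpoints holds only in special circumstances, which is why the abstract flags survival applications as limited.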
22.
Stein's method is used to prove the Lindeberg-Feller theorem and a generalization of the Berry-Esséen theorem. The arguments involve only manipulation of probability inequalities, and form an attractive alternative to the less direct Fourier-analytic methods which are traditionally employed.
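For readers unfamiliar with the technique, the normal case of Stein's method rests on a standard characterization (textbook material, not specific to this paper): W has the standard normal distribution if and only if

$$E\{f'(W) - W f(W)\} = 0$$

for all absolutely continuous f with E|f'(W)| < ∞. Given a test function h, one solves the Stein equation f'(w) - w f(w) = h(w) - E h(Z), with Z ~ N(0,1), and then bounds E{f'(W) - W f(W)} directly; this is where the manipulation of probability inequalities mentioned above enters.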
23.
In response surface methodology, one is usually interested in estimating the optimal conditions based on a small number of experimental runs which are designed to optimally sample the experimental space. Typically, regression models are constructed from the experimental data and interrogated in order to provide a point estimate of the independent variable settings predicted to optimize the response. Unfortunately, these point estimates are rarely accompanied by uncertainty intervals. Though classical frequentist confidence intervals can be constructed for unconstrained quadratic models, higher-order, constrained, or nonlinear models are often encountered in practice. Existing techniques for constructing uncertainty estimates in such situations have not been implemented widely, due in part to the need to set adjustable parameters or because of limited or difficult applicability to constrained or nonlinear problems. To address these limitations, a Bayesian method of determining credible intervals for response surface optima was developed. The approach shows good coverage probabilities on two test problems, is straightforward to implement and is readily applicable to the kind of constrained and/or nonlinear problems that frequently appear in practice.
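To make the posterior-propagation idea concrete, here is a minimal sketch for a single-factor quadratic surface, assuming σ² is fixed at its estimate and ignoring the constrained and nonlinear settings that motivate the paper; the data-generating values and all settings are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated one-factor experiment: y = 2 + 1.5 x - 0.8 x^2 + noise.
# (Illustrative data, not from the paper.)
x = np.linspace(-2, 2, 15)
y = 2 + 1.5 * x - 0.8 * x**2 + rng.normal(0, 0.3, x.size)

# Quadratic regression y ~ 1 + x + x^2 by ordinary least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (x.size - 3)
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)

# Propagate posterior draws of beta (approximate posterior, sigma^2 treated
# as known) to the stationary point x* = -b1 / (2 b2) of the fitted surface.
draws = rng.multivariate_normal(beta_hat, cov_beta, size=10_000)
concave = draws[:, 2] < 0              # keep draws with an interior maximum
x_star = -draws[concave, 1] / (2 * draws[concave, 2])

lo, hi = np.percentile(x_star, [2.5, 97.5])
print(f"point estimate of optimum: {-beta_hat[1] / (2 * beta_hat[2]):.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

The same propagation step carries over when the optimum must be found numerically under constraints, which is the practical advantage the abstract claims over closed-form frequentist intervals.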
24.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
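A minimal sketch of the bagging idea for the simplest case, a mean constrained to be positive (θ1 = 0); the data and settings are invented, and the paper's own theory covers far more general estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

def bagged_constrained_mean(data, theta1=0.0, n_boot=2000, rng=rng):
    """Bagged version of the truncated estimator max(theta_hat, theta1):
    truncate each bootstrap estimate at theta1, then average. The result
    strictly exceeds theta1 unless essentially every resample contradicts
    the constraint."""
    idx = rng.integers(0, data.size, size=(n_boot, data.size))
    boot_means = data[idx].mean(axis=1)
    return np.maximum(boot_means, theta1).mean()

# A sample whose mean sits close to the boundary theta1 = 0.
data = rng.normal(loc=0.05, scale=1.0, size=30)
naive = max(data.mean(), 0.0)            # hard truncation: can sit exactly at 0
bagged = bagged_constrained_mean(data)   # smooth: responds to perturbations
print(f"truncated: {naive:.4f}  bagged: {bagged:.4f}")
```

Because the bagged estimate is an average over resamples, small perturbations of the data move it continuously, in contrast to the hard-truncated estimator.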
25.
Christopher R. Heathcote, Borek D. Puza & Steven P. Roberts 《Australian & New Zealand Journal of Statistics》2009,51(4):481-497
We consider two related aspects of the study of old‐age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub‐populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause‐of‐death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub‐populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz‐like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again.
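To see the mixture mechanism numerically, the sketch below computes the population hazard of a three-component Gompertz mixture; the weights and Gompertz parameters are invented for illustration (they are not the paper's values), and whether the log-hazard slope actually dips and recovers depends on the mix chosen:

```python
import numpy as np

def gompertz_surv(t, a, b):
    """Gompertz survival: S(t) = exp(-(a/b)(e^{bt} - 1))."""
    return np.exp(-(a / b) * np.expm1(b * t))

def mixture_hazard(t, weights, a, b):
    """Population hazard of a finite Gompertz mixture:
    h(t) = sum_i w_i f_i(t) / sum_i w_i S_i(t), with f_i = h_i S_i."""
    wS = np.array([w * gompertz_surv(t, ai, bi)
                   for w, ai, bi in zip(weights, a, b)])
    wf = np.array([ai * np.exp(bi * t) for ai, bi in zip(a, b)]) * wS
    return wf.sum(axis=0) / wS.sum(axis=0)

t = np.linspace(60, 110, 501)
h = mixture_hazard(t, weights=[0.2, 0.5, 0.3],
                   a=[5e-5, 1e-4, 8e-4], b=[0.14, 0.11, 0.085])
slope = np.gradient(np.log(h), t)   # a single Gompertz keeps this constant
print("log-hazard slope range:", slope.min().round(4), slope.max().round(4))
```

The frailest sub-population dies out first; its removal can pull the aggregate hazard below a Gompertz trajectory before the remaining groups restore the increase, which is the three-group behaviour the abstract describes.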
26.
The spread of an emerging infectious disease is a major public health threat. Given the uncertainties associated with vector-borne diseases, in terms of vector dynamics and disease transmission, it is critical to develop statistical models that address how and when such an infectious disease could spread throughout a region such as the USA. This paper considers a spatio-temporal statistical model for how an infectious disease could be carried into the USA by migratory waterfowl vectors during their seasonal migration and, ultimately, the risk of transmission of such a disease to domestic fowl. Modeling spatio-temporal data of this type is inherently difficult given the uncertainty associated with observations, complexity of the dynamics, high dimensionality of the underlying process, and the presence of excessive zeros. In particular, the spatio-temporal dynamics of the waterfowl migration are developed by way of a two-tiered functional temporal and spatial dimension reduction procedure that captures spatial and seasonal trends, as well as regional dynamics. Furthermore, the model relates the migration to a population of poultry farms that are known to be susceptible to such diseases, and is one of the possible avenues toward transmission to domestic poultry and humans. The result is a predictive distribution of those counties containing poultry farms that are at the greatest risk of having the infectious disease infiltrate their flocks assuming that the migratory population was infected. The model naturally fits into the hierarchical Bayesian framework.
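One generic way to write such a reduced-rank spatio-temporal count model, in our notation rather than the paper's (the actual two-tiered specification is more elaborate), is

$$Y_t(s) \mid \lambda_t(s) \sim \text{ZI-Poisson}\{\lambda_t(s)\}, \qquad \log \boldsymbol{\lambda}_t = \boldsymbol{\Phi}\,\boldsymbol{\alpha}_t, \qquad \boldsymbol{\alpha}_t = \mathbf{H}\,\boldsymbol{\alpha}_{t-1} + \boldsymbol{\eta}_t, \quad \boldsymbol{\eta}_t \sim N(\mathbf{0}, \boldsymbol{\Sigma}_\eta),$$

where the columns of Φ are a small number of spatial and seasonal basis functions (the dimension reduction), H carries the migration dynamics, and the zero-inflated observation model absorbs the excess zeros. Priors on H and Σ_η complete a hierarchical Bayesian specification of this general shape.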
27.
We show that the correlation between the estimates of two parameters is almost unchanged if each is transformed in an arbitrary way. More specifically, the correlation of two estimates is invariant (except for a possible sign change), up to a first-order approximation, to smooth transformations of the estimates. There is a sign change if exactly one of the transformations is decreasing in a neighborhood of its parameter. In addition, we approximate the variance, covariance and correlation between functions of sample means and moments.
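The first-order argument is a direct delta-method calculation: linearizing g and h about the true parameter values gives

$$\operatorname{corr}\{g(\hat\theta_1), h(\hat\theta_2)\} \approx \frac{g'(\theta_1)\,h'(\theta_2)\,\operatorname{cov}(\hat\theta_1, \hat\theta_2)}{|g'(\theta_1)|\,|h'(\theta_2)|\,\sqrt{\operatorname{var}(\hat\theta_1)\,\operatorname{var}(\hat\theta_2)}} = \operatorname{sign}\{g'(\theta_1)\,h'(\theta_2)\}\,\operatorname{corr}(\hat\theta_1, \hat\theta_2),$$

so the magnitude is unchanged and the sign flips exactly when one of the two derivatives is negative at its parameter.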
28.
Christopher S. Withers 《Statistics》2013,47(1):159-166
H. Kres: Statistische Tafeln zur multivariaten Analysis. Springer-Verlag, Berlin-Heidelberg-New York 1975, XVIII, 431 pp., 26 tables, DM 48. D. Rasch: Einführung in die mathematische Statistik - Wahrscheinlichkeitsrechnung und Grundlagen der mathematischen Statistik. VEB Deutscher Verlag der Wissenschaften, Berlin 1976, 371 pp., 37 figures, 46 tables, M 40. D. Rasch: Einführung in die mathematische Statistik - II. Anwendungen. VEB Deutscher Verlag der Wissenschaften, Berlin 1976. Donald L. Snyder: Random Point Processes. John Wiley & Sons, New York 1975, 485 pp.
29.
An internal pilot with interim analysis (IPIA) design combines interim power analysis (an internal pilot) with interim data analysis (two-stage group sequential). We provide IPIA methods for single-df hypotheses within the Gaussian general linear model, including one- and two-group t tests. The design allows early stopping for efficacy and futility while also re-estimating sample size based on an interim variance estimate. Study planning in small samples requires the exact and computable forms reported here. The formulation gives fast and accurate calculations of power, Type I error rate, and expected sample size.
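Stripped of the exact small-sample corrections the paper supplies, the re-estimation step reduces to the familiar normal-approximation formula; the sketch below is that simplified version only, with invented inputs:

```python
from math import ceil
from scipy.stats import norm

def reestimated_n_per_group(s2_interim, delta, alpha=0.05, power=0.9):
    """Normal-approximation sample size per group for a two-group comparison,
    n = 2 (z_{1-alpha/2} + z_{power})^2 s^2 / delta^2, re-evaluated at the
    interim variance estimate. The paper's exact IPIA calculations also
    account for the randomness of s^2 and the sequential stopping rules."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * (z_a + z_b) ** 2 * s2_interim / delta ** 2)

# Interim variance estimate 4.2, clinically relevant difference 1.5
# (both values invented for illustration).
print(reestimated_n_per_group(4.2, 1.5))
```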
30.
For the analysis of survey-weighted categorical data, one recommended method of analysis is a log-rate model. For each cell in a contingency table, the survey weights are averaged across subjects and incorporated into an offset for a loglinear model. Supposedly, one can then proceed with the analysis of unweighted observed cell counts. We provide theoretical and simulation-based evidence to show that the log-rate model is not an effective method of analysis and should not be used in general. The root of the problem is its failure to properly account for variability in the individual weights within cells of a contingency table. This results in goodness-of-fit tests that have higher-than-nominal error rates and confidence intervals for odds ratios that have lower-than-nominal coverage.
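For concreteness, here is a minimal sketch of the log-rate construction being critiqued, on an invented 2x2 table, using statsmodels; the fit treats each cell's weights as if they were constant at their mean, which is exactly the source of the understated variability:

```python
import numpy as np
import statsmodels.api as sm

# Invented 2x2 table flattened to 4 cells: unweighted counts and the
# average survey weight of the subjects in each cell.
counts = np.array([40, 25, 30, 55])
mean_w = np.array([1.8, 2.3, 2.1, 1.6])

# Main-effects loglinear design: intercept, row factor, column factor.
row = np.array([0, 0, 1, 1])
col = np.array([0, 1, 0, 1])
X = sm.add_constant(np.column_stack([row, col]))

# Log-rate recipe: Poisson loglinear model for the unweighted counts with
# log(mean weight) entering as an offset.
fit = sm.GLM(counts, X, family=sm.families.Poisson(),
             offset=np.log(mean_w)).fit()
print(fit.params)   # standard errors ignore within-cell weight variation
```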