1,198 results found (search time: 15 ms)
1.
Chin-Tsang Chiang, Mei-Cheng Wang & Chiung-Yu Huang. Scandinavian Journal of Statistics, 2005, 32(1): 77–91.
Abstract. Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. With an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for the bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. We show that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
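As a rough illustration, the moment-type estimator discussed above can be sketched by kernel-smoothing the pooled recurrent-event times and dividing by the number of subjects still under observation at t. This is a minimal sketch assuming independent censoring; the function name, Gaussian kernel, and data layout are illustrative choices, not taken from the paper:

```python
import math

def kernel_rate(t, event_times, censor_times, h):
    """Moment-type kernel estimate of the occurrence rate at time t.

    event_times: one list of recurrent-event times per subject
    censor_times: one censoring time per subject (independent censoring)
    h: bandwidth (to be chosen, e.g., by a bootstrap procedure)
    """
    at_risk = sum(1 for c in censor_times if c >= t)   # subjects still observed at t
    if at_risk == 0:
        return float("nan")                            # rate undefined past all censoring
    gauss = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    smoothed = sum(gauss((t - tij) / h) / h
                   for times in event_times for tij in times)
    return smoothed / at_risk
```

The "nicks" mentioned in the abstract arise because the at-risk count in the denominator drops abruptly at each censoring time.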
2.
Michael P. Fay & Ji-Hyun Lee. Journal of the Royal Statistical Society, Series A (Statistics in Society), 2006, 169(1): 81–96.
Summary. We detail a general method for measuring agreement between two statistics. An application is two ratios of directly standardized rates which differ only by the choice of the standard. If the statistics have a high value for the coefficient of agreement then the expected squared difference between the statistics is small relative to the variance of the average of the two statistics, and inferences vary little by changing statistics. The estimation of a coefficient of agreement between two statistics is not straightforward because there is only one pair of observed values, each statistic calculated from the data. We introduce estimators of the coefficient of agreement for two statistics and discuss their use, especially as applied to functions of standardized rates.
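For concreteness, a directly standardized rate is a weighted average of stratum-specific rates, with weights taken from a chosen standard population; two choices of standard yield two statistics whose agreement is what the coefficient measures. A minimal sketch (function name and data layout are hypothetical):

```python
def directly_standardized_rate(events, person_years, standard):
    """Weighted average of stratum-specific rates, with weights
    proportional to the standard population's stratum sizes.

    events, person_years: per-stratum event counts and exposure
    standard: per-stratum sizes of the chosen standard population
    """
    total = sum(standard)
    return sum((d / n) * (w / total)
               for d, n, w in zip(events, person_years, standard))
```

Applying two different standards to the same stratum-specific data gives two different statistics computed from the same observations, which is exactly the one-pair-of-values situation discussed above.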
3.
John D. Emerson, David C. Hoaglin & Frederick Mosteller. Statistical Methods and Applications, 1993, 2(3): 269–290.
Summary. Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results.
This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
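For reference, the unmodified DerSimonian-Laird procedure that the paper starts from can be sketched in its standard method-of-moments form; variable names are illustrative:

```python
def dersimonian_laird(effects, variances):
    """Standard DerSimonian-Laird random-effects pooled estimate.

    effects: per-study risk differences
    variances: estimated within-study variances
    Returns (pooled estimate, between-study variance tau^2).
    """
    w = [1.0 / v for v in variances]                   # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    wr = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    return pooled, tau2
```

The bias discussed above stems from these weights being estimated from the same data as the risk differences themselves; the paper's modification replaces the conditional within-study variances with unconditional ones.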
4.
Generalized additive models for location, scale and shape
R. A. Rigby & D. M. Stasinopoulos. Journal of the Royal Statistical Society, Series C (Applied Statistics), 2005, 54(3): 507–554.
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton-Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
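As a toy illustration of modelling distribution parameters beyond the mean, the sketch below fits a linear submodel for the location and then a linear submodel for log squared residuals as a crude log-scale fit. This is not the paper's algorithm (which fits all parameters jointly by penalized likelihood with Newton-Raphson or Fisher scoring and backfitting); the two-stage shortcut and all names are illustrative:

```python
import math

def ols(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
    return ym - b * xm, b

def fit_location_scale(x, y):
    """Two-stage sketch in the GAMLSS spirit: a linear submodel for the
    location, then a linear submodel for log squared residuals as a
    crude model for the log-scale."""
    a, b = ols(x, y)                                   # location submodel
    log_r2 = [math.log(max((yi - a - b * xi) ** 2, 1e-12))
              for xi, yi in zip(x, y)]
    c, d = ols(x, log_r2)                              # log-scale submodel
    return (a, b), (c, d)
```

With heteroscedastic data, the second-stage slope picks up how the spread of y changes with x, which in a GAMLSS would be one of the additional modelled parameters.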
5.
Jerald F. Lawless. Revue canadienne de statistique, 2004, 32(3): 327–331.
Oller, Gómez & Calle (2004) give a constant-sum condition for processes that generate interval-censored lifetime data. They show that in models satisfying this condition, it is possible to estimate non-parametrically the lifetime distribution based on a well-known simplified likelihood. The author shows that this constant-sum condition is equivalent to the existence of an observation process that is independent of lifetimes and which gives the same probability distribution for the observed data as the underlying true process.
6.
Steven T. Yen, W. Douglass Shaw & Mark E. Eiswerth. Review of Economics of the Household, 2004, 2(1): 73–88.
Asthma patients' health status may be especially sensitive to some types of air pollution, but the evidence on this is mixed. We explore the effects of ground-level ozone on asthma patients' activities, breaking apart the usual aggregated category of leisure into indoor and outdoor activities, and differentiating those by whether the activities were active or inactive. Applying the semiparametric censored estimation method, we demonstrate that even though ozone levels were relatively low during the period over which activities were observed, ozone has a significant impact on a few activities. The (non-ozone) economic and demographic variables in the model play significant roles in explaining the allocation of time among seven activities, suggesting the suitability of the approach for other household decision-making contexts.
7.
Tommi Härkänen, Hannu Hausen, Jorma I. Virtanen & Elja Arjas. Scandinavian Journal of Statistics, 2003, 30(3): 523–533.
Abstract. A model is introduced here for multivariate failure time data arising from heterogeneous populations. In particular, we consider a situation in which the failure times of individual subjects are often temporally clustered, so that many failures occur during a relatively short age interval. The clustering is modelled by assuming that the subjects can be divided into ‘internally homogeneous’ latent classes, each such class being then described by a time-dependent frailty profile function. As an example, we reanalysed the dental caries data presented earlier in Härkänen et al. [Scand. J. Statist. 27 (2000) 577], as it turned out that our earlier model could not adequately describe the observed clustering.
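A toy version of the latent-class idea: score each failure time under K classes, here with constant hazards for simplicity, whereas the paper's classes carry full time-dependent frailty profile functions. The function name and data layout are hypothetical:

```python
import math

def latent_class_loglik(times, class_probs, hazards):
    """Log-likelihood of failure times under a finite mixture of
    'internally homogeneous' classes, each with a constant hazard
    (so an exponential density); a crude stand-in for the paper's
    time-dependent frailty profiles."""
    ll = 0.0
    for t in times:
        density = sum(p * h * math.exp(-h * t)         # class-specific density
                      for p, h in zip(class_probs, hazards))
        ll += math.log(density)
    return ll
```

Temporal clustering shows up here as one class having a much larger hazard over a short interval, so that many failures are attributed to that class.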
8.
James M. Robins. Lifetime Data Analysis, 1995, 1(3): 241–254.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated-measures outcome, such as CD4 count.
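The baseline that the proposed tests extend is the weighted log-rank statistic; with weight 1 at every event time it reduces to the familiar unweighted form sketched below (the data layout is illustrative):

```python
import math

def logrank_statistic(times, events, groups):
    """Unweighted two-sample log-rank statistic (weight 1 at each
    distinct event time).

    times: follow-up times; events: 1 if the event was observed,
    0 if censored; groups: 0/1 treatment-arm labels.
    """
    num, var = 0.0, 0.0
    for t in sorted({ti for ti, ei in zip(times, events) if ei}):
        n = sum(1 for ti in times if ti >= t)          # total at risk at t
        n1 = sum(1 for ti, gi in zip(times, groups) if ti >= t and gi == 1)
        d = sum(1 for ti, ei in zip(times, events) if ei and ti == t)
        d1 = sum(1 for ti, ei, gi in zip(times, events, groups)
                 if ei and ti == t and gi == 1)
        num += d1 - d * n1 / n                         # observed minus expected
        if n > 1:                                      # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / math.sqrt(var) if var > 0 else 0.0
```

Under informative censoring this statistic can lose its α-level guarantee, which is the situation the paper's tests are designed to handle under the two stated assumptions.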
9.
Given d > 2 and a set of n grid points Q in ℝ^d, we design a randomized algorithm that finds a w-wide separator, which is determined by a hyper-plane, in sublinear time, such that Q has at most […] points on either side of the hyper-plane, and at most […] points within […] distance to the hyper-plane, where c_d is a constant for fixed d. In particular, c_3 = 1.209. To the best of our knowledge, this is the first sublinear time algorithm for finding geometric separators. Our 3D separator is applied to derive an algorithm for the protein side-chain packing problem, which improves and simplifies the previous algorithm of Xu (Research in Computational Molecular Biology, 9th Annual International Conference, pp. 408–422, 2005).
This research is supported by the Louisiana Board of Regents fund under contract number LEQSF(2004-07)-RD-A-35.
Part of this research was done while Bin Fu was affiliated with the Department of Computer Science, University of New Orleans, LA 70148, USA, and the Research Institute for Children, 200 Henry Clay Avenue, New Orleans, LA 70118, USA.
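The flavour of the randomized, sampling-based approach can be sketched as follows: pick a random direction and pivot, then estimate the balance of the induced split from a small sample of Q instead of scanning all n points, which is where sublinear running time comes from. The paper's width guarantee and the constant c_d are not reproduced; all names are illustrative:

```python
import random

def sample_separator(points, sample_size=100, seed=0):
    """Toy sketch of a random hyper-plane split: a random normal
    direction through a random pivot point, with the fraction of
    points on the positive side estimated from a small sample."""
    rng = random.Random(seed)
    d = len(points[0])
    normal = [rng.gauss(0, 1) for _ in range(d)]       # random direction
    pivot = points[rng.randrange(len(points))]         # random pivot point
    sample = [points[rng.randrange(len(points))] for _ in range(sample_size)]
    above = sum(1 for p in sample
                if sum(ni * (pi - qi)
                       for ni, pi, qi in zip(normal, p, pivot)) > 0)
    return normal, pivot, above / sample_size          # estimated balance
```

Because only the sample is examined, the running time is independent of n once the points are accessible by random index, mirroring the sublinear-time theme above.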
10.
It is shown that the uncertainty connected with a ‘random in a broad sense’ (not necessarily stochastic) event always has some ‘statistical regularity’ (SR) in the form of a family of finitely additive probability distributions. A specific principle of guaranteed result in decision making is introduced. It is shown that observing this principle of guaranteed result leads to a single optimality criterion corresponding to a decision system with a given statistical regularity.
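The principle of guaranteed result has the familiar maximin shape: evaluate each action by its worst expected utility over the family of distributions making up the statistical regularity, then choose the action with the best such guarantee. A minimal sketch with illustrative names:

```python
def guaranteed_result(actions, distributions, utility):
    """Choose the action maximizing worst-case expected utility over a
    family of probability distributions (the 'statistical regularity').

    distributions: list of dicts mapping state -> probability
    utility: function (action, state) -> payoff
    """
    def guarantee(action):
        # Worst expected utility of this action over the whole family.
        return min(sum(p * utility(action, s) for s, p in dist.items())
                   for dist in distributions)
    return max(actions, key=guarantee)
```

When the family shrinks to a single distribution this reduces to ordinary expected-utility maximization; the wider the family, the more conservative the chosen action.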