91.
Communications in Statistics - Simulation and Computation, 2013, 42(3): 581-595
This paper concerns maximum likelihood estimation for the semiparametric shared gamma frailty model, that is, the Cox proportional hazards model with the hazard function multiplied by a gamma random variable with mean 1 and variance θ. A hybrid ML-EM algorithm is applied to 26,400 simulated samples of 400 to 8000 observations with Weibull hazards. The hybrid algorithm is much faster than the standard EM algorithm, faster than standard direct maximum likelihood (ML, Newton-Raphson) for large samples, and gives almost identical results to the penalised likelihood method in S-PLUS 2000. When the true value θ0 of θ is zero, the estimates of θ are asymptotically distributed as a 50-50 mixture between a point mass at zero and a normal random variable on the positive axis. When θ0 > 0, the asymptotic distribution is normal. However, for small samples, simulations suggest that the estimates of θ are approximately distributed as an x% to (100 − x)% mixture, 0 ≤ x ≤ 50, between a point mass at zero and a normal random variable on the positive axis even for θ0 > 0. In light of this, p-values and confidence intervals need to be adjusted accordingly. We indicate an approximate method for carrying out the adjustment.
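The boundary adjustment for testing θ = 0 can be sketched concretely. The function below is a hypothetical illustration of the standard 50-50 mixture correction, not the authors' small-sample x% to (100 − x)% refinement; the function name is made up for illustration.

```python
from scipy.stats import chi2

def boundary_pvalue(lrt_stat):
    # Under the boundary asymptotics, the likelihood-ratio statistic for
    # H0: theta = 0 is a 50-50 mixture of a point mass at zero and a
    # chi-square with one degree of freedom, so the tail probability is
    # half the usual chi-square tail.
    return 0.5 * chi2.sf(lrt_stat, df=1)
```

The naive chi-square p-value would be exactly twice this, so the uncorrected test is conservative.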
92.
In this article the problem of the optimal selection and allocation of time points in repeated measures experiments is considered. D-optimal designs for linear regression models with a random intercept and first-order auto-regressive serial correlations are computed numerically and compared with designs having equally spaced time points. When the order of the polynomial is known and the serial correlations are not too small, the comparison shows that for any fixed number of repeated measures, a design with equally spaced time points is almost as efficient as the D-optimal design. When, however, there is no prior knowledge about the order of the underlying polynomial, the best choice in terms of efficiency is a D-optimal design for the highest possible relevant order of the polynomial. A design with equally spaced time points is the second-best choice.
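The efficiency comparison described above can be sketched numerically. Assuming a first-order (straight-line) mean, a random intercept with variance sigma_b2, and AR(1) serial correlation rho, the D-criterion is det(X'V⁻¹X); all parameter values and the competitor design below are made up for illustration.

```python
import numpy as np

def d_criterion(times, rho=0.5, sigma_b2=1.0, sigma2=1.0):
    # Design matrix for a first-order polynomial (intercept + slope).
    t = np.asarray(times, dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    # Covariance: random intercept (adds sigma_b2 everywhere)
    # plus AR(1) serial correlation decaying with the time lag.
    lag = np.abs(np.subtract.outer(t, t))
    V = sigma_b2 + sigma2 * rho ** lag
    info = X.T @ np.linalg.solve(V, X)   # information matrix X' V^{-1} X
    return np.linalg.det(info)

equal = np.linspace(0.0, 1.0, 5)               # equally spaced time points
shifted = np.array([0.0, 0.1, 0.2, 0.9, 1.0])  # an arbitrary competitor
# D-efficiency of the competitor relative to equal spacing (p = 2 parameters).
eff = (d_criterion(shifted) / d_criterion(equal)) ** (1 / 2)
```

Repeating this over candidate designs is the numerical comparison the article reports; the abstract's conclusion is that `equal` is nearly optimal when rho is not too small.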
93.
The alias method of Walker is a clever, new, fast method for generating random variables from an arbitrary, specified discrete distribution. A simple probabilistic proof is given, in terms of mixtures, that the method works for any discrete distribution with a finite number of outcomes. A more efficient version of the table-generating portion of the method is described. Finally, a brief discussion on efficiency of the method is given. We believe that the generality, speed, and simplicity of the method make it attractive for use in generating discrete random variables.
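The alias method is simple enough to sketch in full. This is the common two-worklist table construction, which may differ in detail from the more efficient version the paper describes; after O(n) setup, each draw costs O(1).

```python
import random

def build_alias_table(probs):
    """Build the probability/alias tables for the alias method.
    probs must sum to 1."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]     # large cell donates mass to the small one
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:              # leftovers equal 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """Draw one outcome: pick a column uniformly, then flip a biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each column is a two-point mixture, which is exactly the structure the paper's probabilistic proof exploits.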
94.
Longitudinal investigations play an increasingly prominent role in biomedical research. Much of the literature on specifying and fitting linear models for serial measurements uses methods based on the standard multivariate linear model. This article proposes a more flexible approach that permits specification of the expected response as an arbitrary linear function of fixed and time-varying covariates so that mean-value functions can be derived from subject matter considerations rather than methodological constraints. Three families of models for the covariance function are discussed: multivariate, autoregressive, and random effects. Illustrations demonstrate the flexibility and utility of the proposed approach to longitudinal analysis.
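The three covariance families can be written down concretely. A minimal sketch for q = 4 measurement times, with made-up parameter values:

```python
import numpy as np

q = 4
sigma2, rho, sigma_b2 = 2.0, 0.6, 1.5

# 1. Multivariate (unstructured): q(q+1)/2 free parameters; here just
#    an arbitrary symmetric positive-definite example.
unstructured = np.array([[2.0, 0.5, 0.3, 0.1],
                         [0.5, 2.0, 0.5, 0.3],
                         [0.3, 0.5, 2.0, 0.5],
                         [0.1, 0.3, 0.5, 2.0]])

# 2. First-order autoregressive: correlation decays geometrically with lag.
lag = np.abs(np.subtract.outer(np.arange(q), np.arange(q)))
ar1 = sigma2 * rho ** lag

# 3. Random effects (random intercept): compound symmetry, a constant
#    between-subject component plus within-subject measurement error.
cs = sigma_b2 * np.ones((q, q)) + sigma2 * np.eye(q)
```

The families trade flexibility for parsimony: the unstructured model grows quadratically in q, while AR(1) and compound symmetry use a fixed small number of parameters.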
95.
We develop a novel nonparametric likelihood ratio test for independence between two random variables using a technique that is free of the common constraints of defining a given set of specific dependence structures. Our methodology revolves around an exact density-based empirical likelihood ratio test statistic that approximates in a distribution-free fashion the corresponding most powerful parametric likelihood ratio test. We demonstrate that the proposed test is very powerful in detecting general structures of dependence between two random variables, including nonlinear and/or random-effect dependence structures. An extensive Monte Carlo study confirms that the proposed test is superior to the classical nonparametric procedures across a variety of settings. The real-world applicability of the proposed test is illustrated using data from a study of biomarkers associated with myocardial infarction. Supplementary materials for this article are available online.
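For contrast, one of the classical nonparametric baselines such a Monte Carlo study would compare against can be sketched as a permutation test. This is not the paper's density-based empirical likelihood statistic; it is a generic correlation-based permutation test, and the function name is made up.

```python
import random

def perm_independence_pvalue(x, y, n_perm=999, seed=0):
    # Permutation test of independence using the absolute sample
    # correlation as the statistic. Unlike the density-based test,
    # this only has power against (near-)linear dependence.
    def abs_corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        da = sum((u - ma) ** 2 for u in a) ** 0.5
        db = sum((v - mb) ** 2 for v in b) ** 0.5
        return abs(num / (da * db))
    observed = abs_corr(x, y)
    rng = random.Random(seed)
    yy, hits = list(y), 0
    for _ in range(n_perm):
        rng.shuffle(yy)           # break any dependence by permuting y
        if abs_corr(x, yy) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

The limitation in the last comment is precisely the gap the proposed test is designed to close.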
96.
Inge S. Helland, The American Statistician, 2013, 67(4): 351-356
The famous theorem of Birnbaum, stating that the likelihood principle follows from the conditionality principle together with the sufficiency principle, has caused much discussion among statisticians. Briefly, many writers dislike the consequences of the likelihood principle (among other things, confidence coefficients and levels of tests are dismissed as meaningless), but at the same time they feel that both the conditionality principle and the sufficiency principle are intuitively obvious. In the present article we give examples to show that the conditionality principle should not be taken to be of universal validity, and we discuss some consequences of these examples.
97.
Daniel T. Voss, The American Statistician, 2013, 67(4): 352-356
Two standard mixed models with interactions are discussed. When each is viewed in the context of superpopulation models, the mixed models controversy is resolved. The tests suggested by the expected mean squares under the constrained-parameters model are correct for testing the main effects and interactions under both the unconstrained- and constrained-parameters models.
98.
The principal components analysis (PCA) in the frequency domain of a stationary p-dimensional time series (X_n)_{n∈Z} leads to a summarizing time series written as a linear combination series X'_n = Σ_m C_m ∘ X_{n−m}. Therefore, we observe that, when the coefficients C_m, m ≠ 0, are close to 0, this PCA is close to the usual PCA, that is, the PCA in the temporal domain. When the coefficients tend to 0, the corresponding limit is said to satisfy a property denoted 𝒫, whose consequences we will study. Finally, we will examine, for any series, the proximity between the two PCAs.
99.
Lihong Wang, Communications in Statistics - Simulation and Computation, 2013, 42(1): 48-61
We consider the estimation of a change point or discontinuity in a regression function for a random design model with long memory errors. We provide several change-point estimators and investigate their consistency. Using the fractional ARIMA process as an example of a long memory process, we report a small Monte Carlo experiment to compare the performance of the estimators in finite samples. We finish by applying the method to a climatological data example.
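The basic idea behind such estimators can be sketched with a least-squares scan for a mean shift. This is a generic textbook estimator on toy data, not necessarily one of the paper's estimators, and it ignores the long-memory error structure whose effect on consistency is exactly what the paper investigates.

```python
import numpy as np

def ls_changepoint(y):
    # Least-squares change-point scan: choose the split index that
    # minimizes the total residual sum of squares of a piecewise-
    # constant fit (one mean on each side of the split).
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_k, best_rss = 1, np.inf
    for k in range(1, n):
        left, right = y[:k], y[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k
```

Under long memory errors, spurious local trends mimic level shifts, which is why consistency requires the separate analysis the paper provides.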
100.
Charles South, Ryan Elmore, Andrew Clarage, Rob Sickorez, Jing Cao, The American Statistician, 2019, 73(2): 179-185
Fantasy sports, particularly the daily variety in which new lineups are selected each day, are a rapidly growing industry. The two largest companies in the daily fantasy business, DraftKings and FanDuel, have been valued as high as $2 billion. This research focuses on the development of a complete system for daily fantasy basketball, including both the prediction of player performance and the construction of a team. First, a Bayesian random effects model is used to predict an aggregate measure of daily NBA player performance. The predictions are then used to construct teams under the constraints of the game, typically related to a fictional salary cap and player positions. Permutation-based and K-nearest neighbors approaches are compared in terms of the identification of "successful" teams: those who would be competitive more often than not based on historical data. We demonstrate the efficacy of our system by comparing our predictions to those from a well-known analytics website, and by simulating daily competitions over the course of the 2015-2016 season. Our results show an expected profit of approximately $9,000 on an initial $500 investment using the K-nearest neighbors approach, a 36% increase relative to using the permutation-based approach alone. Supplementary materials for this article are available online.
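The team-construction step can be sketched as a constrained search: given predicted points and salaries, find the lineup under the cap with the highest total prediction. The brute-force version below uses made-up toy data and ignores position constraints; it is not the paper's permutation or K-nearest neighbors approach, which are designed to scale to realistic player pools.

```python
from itertools import combinations

def best_lineup(players, cap, size):
    # players: list of (name, predicted_points, salary) tuples.
    # Exhaustive search over lineups of the given size that fit under
    # the salary cap, maximizing total predicted points.
    best, best_pts = None, float("-inf")
    for team in combinations(players, size):
        if sum(p[2] for p in team) <= cap:
            pts = sum(p[1] for p in team)
            if pts > best_pts:
                best, best_pts = team, pts
    return best, best_pts

pool = [("A", 50, 30), ("B", 40, 20), ("C", 30, 10), ("D", 20, 5)]
team, pts = best_lineup(pool, cap=40, size=2)
```

Exhaustive search is exponential in the lineup size, which is why heuristic construction methods are needed at real-game scale.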