Full-text access type
Paid full text | 1385 articles |
Free | 37 articles |
Free (domestic) | 7 articles |
Subject category
Management | 60 articles |
Talent studies | 1 article |
Demography | 1 article |
Collected works | 30 articles |
Theory and methodology | 11 articles |
General | 276 articles |
Sociology | 20 articles |
Statistics | 1030 articles |
Publication year
2023 | 12 articles |
2022 | 12 articles |
2021 | 12 articles |
2020 | 19 articles |
2019 | 48 articles |
2018 | 51 articles |
2017 | 79 articles |
2016 | 38 articles |
2015 | 19 articles |
2014 | 46 articles |
2013 | 246 articles |
2012 | 89 articles |
2011 | 62 articles |
2010 | 49 articles |
2009 | 63 articles |
2008 | 47 articles |
2007 | 61 articles |
2006 | 60 articles |
2005 | 57 articles |
2004 | 52 articles |
2003 | 44 articles |
2002 | 38 articles |
2001 | 40 articles |
2000 | 32 articles |
1999 | 19 articles |
1998 | 19 articles |
1997 | 22 articles |
1996 | 8 articles |
1995 | 13 articles |
1994 | 7 articles |
1993 | 7 articles |
1992 | 8 articles |
1991 | 10 articles |
1990 | 4 articles |
1989 | 1 article |
1988 | 8 articles |
1987 | 5 articles |
1986 | 2 articles |
1985 | 3 articles |
1984 | 2 articles |
1983 | 5 articles |
1982 | 5 articles |
1981 | 2 articles |
1980 | 1 article |
1979 | 1 article |
1978 | 1 article |
Sort order: 1,429 results found; search took 15 ms
221.
N. Balakrishnan 《Communications in Statistics: Simulation and Computation》2015,44(3):591-613
In this paper, when a jointly Type-II censored sample arising from k independent exponential populations is available, the conditional MLEs of the k exponential mean parameters are derived. The moment generating functions and exact densities of these MLEs are obtained, from which exact confidence intervals for the parameters are developed. Moreover, approximate confidence intervals based on the asymptotic normality of the MLEs, as well as credible intervals from a Bayesian viewpoint, are also discussed. An empirical comparison of the exact, approximate, bootstrap, and Bayesian intervals is made in terms of coverage probabilities. Finally, an example is presented to illustrate all the methods of inference developed here.
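The estimator at the heart of this setting is easy to state in the single-population special case: under Type-II censoring, the MLE of the exponential mean is the total time on test divided by the number of observed failures. A minimal sketch follows (the joint k-population version in the paper conditions on how many failures each population contributes; the function name and interface here are hypothetical):

```python
def exp_mle_type2(ordered_failures, n):
    """MLE of an exponential mean under Type-II censoring.

    ordered_failures: the r smallest lifetimes observed (sorted);
    n: total number of units placed on test.  Each of the n - r
    censored units contributes the largest observed failure time
    to the total time on test.
    """
    r = len(ordered_failures)
    total_time_on_test = sum(ordered_failures) + (n - r) * ordered_failures[-1]
    return total_time_on_test / r
```

For example, with observed failures [1, 2, 3, 4, 5] out of n = 8 units on test, the total time on test is 15 + 3 × 5 = 30, giving a mean estimate of 6.0.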
222.
Björn Bornkamp Kaspar Rufibach Jianchang Lin Yi Liu Devan V. Mehrotra Satrajit Roychoudhury Heinz Schmidli Yue Shentu Marcel Wolbers 《Pharmaceutical statistics》2021,20(4):737-751
A randomized trial allows estimation of the causal effect of an intervention, compared to a control, in the overall population and in subpopulations defined by baseline characteristics. Often, however, clinical questions also arise regarding the treatment effect in subpopulations of patients who would experience clinical or disease-related events post-randomization. Events that occur after treatment initiation and potentially affect the interpretation or the existence of the measurements are called intercurrent events in the ICH E9(R1) guideline. If the intercurrent event is a consequence of treatment, randomization alone is no longer sufficient to meaningfully estimate the treatment effect. Analyses comparing the subgroups of patients without the intercurrent event on the intervention and control arms do not estimate a causal effect. This is well known, yet post-hoc analyses of this kind are commonly performed in drug development. An alternative approach is the principal stratum strategy, which classifies subjects according to the potential occurrence of an intercurrent event on both study arms. We illustrate with examples that questions formulated through principal strata arise naturally in drug development, and argue that approaching these questions within the ICH E9(R1) estimand framework can lead to more transparent assumptions as well as more adequate analyses and conclusions. In addition, we provide an overview of the assumptions required for estimating effects in principal strata. Most of these assumptions are unverifiable and should therefore rest on solid scientific understanding. Sensitivity analyses are needed to assess the robustness of the conclusions.
223.
Many models have been proposed that relate failure times and stochastic time-varying covariates. In some of these models, failure occurs when a particular observable marker crosses a threshold level. We are interested in the more difficult, and often more realistic, situation where failure is not related deterministically to an observable marker. In this case, joint models for marker evolution and failure tend to lead to complicated calculations for characteristics such as the marginal distribution of failure time or the joint distribution of failure time and marker value at failure. This paper presents a model based on a bivariate Wiener process in which one component represents the marker and the second, which is latent (unobservable), determines the failure time. In particular, failure occurs when the latent component crosses a threshold level. The model yields reasonably simple expressions for the characteristics mentioned above and is easy to fit to commonly occurring data that involve the marker value at the censoring time for surviving cases and the marker value and failure time for failing cases. Parametric and predictive inference are discussed, as well as model checking. An extension of the model permits the construction of a composite marker from several candidate markers that may be available. The methodology is demonstrated by a simulated example and a case application.
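A simulation sketch of the model's mechanics: the marker and a latent component evolve as a correlated bivariate Wiener process, and failure is declared the first time the latent component crosses the threshold. All parameter names and the Euler discretization are illustrative assumptions, not the paper's estimation procedure:

```python
import math
import random

def simulate_failures(n_paths, horizon, dt, mu, rho, threshold, seed=1):
    """Simulate a bivariate Wiener process (marker, latent) with drifts
    mu = (mu_marker, mu_latent), unit variances and correlation rho.
    Failure occurs the first time the LATENT component reaches
    `threshold`; the marker itself never triggers failure.

    Returns, for each path, (failure time or None if censored at the
    horizon, marker value at failure / at the horizon) -- the kind of
    data the model is fitted to.
    """
    rng = random.Random(seed)
    steps = int(horizon / dt)
    out = []
    for _ in range(n_paths):
        marker = latent = 0.0
        t_fail = None
        for s in range(1, steps + 1):
            z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
            e_latent = rho * z1 + math.sqrt(1 - rho * rho) * z2
            marker += mu[0] * dt + math.sqrt(dt) * z1
            latent += mu[1] * dt + math.sqrt(dt) * e_latent
            if latent >= threshold:
                t_fail = s * dt
                break
        out.append((t_fail, marker))
    return out
```

With a positive latent drift and a modest threshold, most paths fail before the horizon, so each run yields a mix of exact failure times and right-censored marker values.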
224.
Russell Davidson Jean‐Yves Duclos 《Econometrica : journal of the Econometric Society》2000,68(6):1435-1464
We derive the asymptotic sampling distribution of various estimators frequently used to order distributions in terms of poverty, welfare, and inequality. This includes estimators of most of the poverty indices currently in use, as well as estimators of the curves used to infer stochastic dominance of any order. These curves can be used to determine whether poverty, inequality, or social welfare is greater in one distribution than in another for general classes of indices and for ranges of possible poverty lines. We also derive the sampling distribution of the maximal poverty lines up to which we may confidently assert that poverty is greater in one distribution than in another. The sampling distribution of convenient dual estimators for the measurement of poverty is also established. The statistical results are established for deterministic or stochastic poverty lines as well as for paired or independent samples of incomes. Our results are briefly illustrated using data for four countries drawn from the Luxembourg Income Study data bases.
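Many of the poverty indices in common use belong to the Foster–Greer–Thorbecke (FGT) family, whose sample version is simple to compute; a minimal sketch (the paper's contribution is the *sampling distribution* of such estimators, not their computation):

```python
def fgt(incomes, z, alpha):
    """Foster-Greer-Thorbecke poverty index P_alpha with poverty line z.

    alpha = 0 gives the headcount ratio, alpha = 1 the poverty gap,
    alpha = 2 a squared-gap (severity) measure.  Only incomes strictly
    below z contribute.
    """
    n = len(incomes)
    return sum(((z - y) / z) ** alpha for y in incomes if y < z) / n
```

For incomes [1, 2, 3, 4] and poverty line z = 2.5, the headcount ratio is 2/4 = 0.5 and the poverty gap is (0.6 + 0.2)/4 = 0.2.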
225.
This study examines extensions of McNemar's Test with multinomial responses, and proposes a linear weighting scheme, based on the distance of the response change, that is applied to one of these extensions (Bowker's test). This weighted version of Bowker's test is then appropriate for ordinal response variables. A Monte Carlo simulation was conducted to examine the Type I error rate of the weighted Bowker's test for a cross-classification table based on a five-category ordinal response scale. The weighted Bowker's test was also applied to a data set involving change in student attitudes towards mathematics. The results of the weighted Bowker's test were compared with the results of Bowker's test applied to the same set of data.
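A sketch of the idea, assuming the linear weight is the distance |i − j| between the two response categories (Bowker's original statistic corresponds to unit weights; the paper's exact weighting scheme and reference distribution may differ in detail):

```python
def weighted_bowker(table, weight=lambda i, j: abs(i - j)):
    """Bowker's symmetry statistic with a weight on each off-diagonal
    pair (i, j).  The default linear weight |i - j| reflects how far
    the response changed on an ordinal scale; weight = 1 recovers
    Bowker's classical test of symmetry.

    Returns (statistic, number of contributing category pairs).
    """
    k = len(table)
    stat, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            nij, nji = table[i][j], table[j][i]
            if nij + nji > 0:
                stat += weight(i, j) * (nij - nji) ** 2 / (nij + nji)
                pairs += 1
    return stat, pairs
```

With unit weights and a square table the statistic is referred to a chi-square distribution with one degree of freedom per contributing pair; the weighted statistic needs the adjusted reference distribution studied in the paper.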
226.
Solaiman Afroughi Majid Jafari Khaledi Mehdi Ghandehari Motlagh Ebrahim Hajizadeh 《Journal of applied statistics》2011,38(12):2763-2774
The autologistic model, first introduced by Besag, is a popular tool for analyzing binary data on spatial lattices. However, little attention has been paid to modeling binary data clustered in uncorrelated lattices. Owing to the spatial dependency of responses, exact likelihood estimation of the parameters is not possible. To circumvent this difficulty, many studies have been designed to approximate the likelihood and the associated partition function of the model, so the traditional and Bayesian estimation methods based on the likelihood function are often time-consuming and require heavy, recursive computations. Some investigators have introduced data augmentation and latent variable models to reduce the computational complications of parameter estimation. In this work, spatially correlated binary data distributed over uncorrelated lattices are modeled using autologistic regression, a Bayesian inference is developed with the aid of data augmentation, and the proposed models are applied to caries experience in deciduous teeth.
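Although the joint likelihood's partition function is intractable, the full conditionals of the autologistic model are simple Bernoulli distributions, which is what makes Gibbs-style sampling and data augmentation attractive. A minimal sketch of such a sampler on a single lattice (the parameter names and four-neighbour structure are illustrative assumptions, not the paper's exact specification):

```python
import math
import random

def gibbs_autologistic(rows, cols, alpha, beta, sweeps, seed=0):
    """Gibbs sampler for an autologistic model on a rows x cols lattice.

    Each site x[i][j] in {0, 1} is redrawn from its full conditional
    P(x_ij = 1 | neighbours) = logistic(alpha + beta * sum of the four
    nearest neighbours), sidestepping the intractable partition
    function of the joint distribution.
    """
    rng = random.Random(seed)
    x = [[rng.randint(0, 1) for _ in range(cols)] for _ in range(rows)]
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                s = sum(x[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < rows and 0 <= b < cols)
                p = 1.0 / (1.0 + math.exp(-(alpha + beta * s)))
                x[i][j] = 1 if rng.random() < p else 0
    return x
```

A positive beta induces the spatial clustering of 1s that the likelihood-based methods mentioned above must account for.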
227.
Manuel G. Scotto Susana M. Barbosa Andrés M. Alonso 《Journal of applied statistics》2011,38(12):2793-2804
Time series of daily mean temperature obtained from the European Climate Assessment data set are analyzed with respect to their extremal properties. A time-series clustering approach combining Bayesian methodology, extreme value theory and classification techniques is adopted for the analysis of the regional variability of temperature extremes. The daily mean temperature records are clustered on the basis of their predictive distributions for 25-, 50- and 100-year return values. The cluster analysis shows a clear distinction between the highest-altitude stations, for which the return values are lowest, and the remaining stations. Furthermore, a clear distinction is found between the northernmost stations in Scandinavia and the stations in central and southern Europe. This spatial structure of the 25-, 50- and 100-year return value distributions appears consistent with projected changes in the variability of temperature extremes over Europe, pointing to different behavior in central Europe than in northern Europe and the Mediterranean area, possibly related to the effect of soil moisture and land-atmosphere coupling.
228.
TOM BRITTON THEODORE KYPRAIOS PHILIP D. O'NEILL 《Scandinavian Journal of Statistics》2011,38(3):578-599
A stochastic epidemic model is defined in which each individual belongs to a household, a secondary grouping (typically school or workplace) and also the community as a whole. Moreover, infectious contacts take place in these three settings according to potentially different rates. For this model, we consider how different kinds of data can be used to estimate the infection rate parameters with a view to understanding what can and cannot be inferred. Among other things we find that temporal data can be of considerable inferential benefit compared with final size data, that the degree of heterogeneity in the data can have a considerable effect on inference for non-household transmission, and that inferences can be materially different from those obtained from a model with only two levels of mixing. We illustrate our findings by analysing a highly detailed dataset concerning a measles outbreak in Hagelloch, Germany.
229.
Evangelos Evangelou Zhengyuan Zhu Richard L. Smith 《Journal of statistical planning and inference》2011,141(11):3564-3577
Estimation and prediction in generalized linear mixed models are often hampered by intractable high-dimensional integrals. This paper provides a framework for overcoming this intractability using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation when the number of random effects increases at a lower rate than the sample size. Second, we propose an approximate likelihood method based on an asymptotic expansion of the log-likelihood using the modified Laplace approximation, which is maximized with a quasi-Newton algorithm. Finally, we define a second-order plug-in predictive density based on a similar expansion and show that it is a normal density. Our simulations show that, in comparison to other approximations, our method has better performance. The methods apply readily to non-Gaussian spatial data; as an example, an analysis of the rhizoctonia root rot data is presented.
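The classical Laplace approximation that the paper modifies replaces an integral of exp(h(x)) by a Gaussian integral around the mode of h. The textbook one-dimensional example is Stirling's approximation to n!, sketched below (the paper's modified version handles a growing number of random effects rather than this fixed-dimension case):

```python
import math

def laplace_gamma(n):
    """Laplace approximation to n! = integral over x >= 0 of x^n e^{-x}.

    Expanding the log-integrand n*log(x) - x to second order around its
    mode x = n gives Stirling's formula sqrt(2*pi*n) * (n/e)^n.  The
    relative error vanishes as n grows, the same mechanism exploited
    for GLMM likelihoods when the number of random effects is large.
    """
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n
```

Already at n = 20 the approximation is within half a percent of the exact factorial.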
230.
Brent D. Burch 《Journal of statistical planning and inference》2011,141(12):3793-3807
In scenarios where the variance of a response variable can be attributed to two sources of variation, a confidence interval for a ratio of variance components gives information about the relative importance of the two sources. For example, if measurements taken from different laboratories are nine times more variable than measurements taken within laboratories, then 90% of the variance in the responses is due to variability among the laboratories and 10% is due to variability within the laboratories. Assuming normally distributed sources of variation, confidence intervals for variance components are readily available. In this paper, however, simulation studies are conducted to evaluate the performance of such confidence intervals under non-normal distributional assumptions. Confidence intervals based on the pivotal quantity method, fiducial inference, and the large-sample properties of the restricted maximum likelihood (REML) estimator are considered. Simulation results and an empirical example suggest that the REML-based confidence interval is preferred over the other two procedures in the unbalanced one-way random effects model.
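The arithmetic in the laboratory example, together with the usual ANOVA point estimate of the ratio in a balanced design, can be sketched as follows (the paper's focus, by contrast, is interval estimation in the unbalanced, non-normal case; the function names are hypothetical):

```python
def variance_share(ratio):
    """Proportion of total variance attributable to the between-group
    source, given the ratio sigma_b^2 / sigma_w^2."""
    return ratio / (1.0 + ratio)

def ratio_anova_estimate(msb, msw, n):
    """Standard ANOVA estimator of sigma_b^2 / sigma_w^2 in a BALANCED
    one-way random effects model with n observations per group, using
    E[MSB] = sigma_w^2 + n * sigma_b^2 and E[MSW] = sigma_w^2."""
    return (msb / msw - 1.0) / n
```

With a ratio of 9 (laboratories nine times more variable than measurements within them), `variance_share` reproduces the abstract's 90% / 10% split.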