Full-text access type

| Access type | Records |
| --- | --- |
| Paid full text | 3110 |
| Free | 73 |
Subject classification

| Subject | Records |
| --- | --- |
| Management | 417 |
| Ethnology | 21 |
| Talent studies | 1 |
| Demography | 278 |
| Book series and collections | 8 |
| Popular education | 1 |
| Theory and methodology | 367 |
| General | 26 |
| Sociology | 1674 |
| Statistics | 390 |
Publication year

| Year | Records |
| --- | --- |
| 2023 | 27 |
| 2022 | 12 |
| 2021 | 27 |
| 2020 | 50 |
| 2019 | 97 |
| 2018 | 79 |
| 2017 | 114 |
| 2016 | 119 |
| 2015 | 75 |
| 2014 | 88 |
| 2013 | 444 |
| 2012 | 126 |
| 2011 | 127 |
| 2010 | 88 |
| 2009 | 90 |
| 2008 | 86 |
| 2007 | 123 |
| 2006 | 106 |
| 2005 | 97 |
| 2004 | 103 |
| 2003 | 83 |
| 2002 | 81 |
| 2001 | 82 |
| 2000 | 62 |
| 1999 | 76 |
| 1998 | 45 |
| 1997 | 34 |
| 1996 | 50 |
| 1995 | 37 |
| 1994 | 39 |
| 1993 | 33 |
| 1992 | 44 |
| 1991 | 37 |
| 1990 | 33 |
| 1989 | 30 |
| 1988 | 25 |
| 1987 | 24 |
| 1986 | 39 |
| 1985 | 24 |
| 1984 | 19 |
| 1983 | 22 |
| 1982 | 23 |
| 1981 | 17 |
| 1980 | 17 |
| 1979 | 30 |
| 1978 | 18 |
| 1976 | 13 |
| 1975 | 10 |
| 1974 | 7 |
| 1973 | 8 |
3,183 results found.
81.
Andreas I. Sashegyi, K. Stephen Brown, Patrick J. Farrell. Revue canadienne de statistique, 2000, 28(1): 45-63
Some studies generate data that can be grouped into clusters in more than one way. Consider for instance a smoking prevention study in which responses on smoking status are collected over several years in a cohort of students from a number of different schools. This yields longitudinal data, also cross-sectionally clustered in schools. The authors present a model for analyzing binary data of this type, combining generalized estimating equations and estimation of random effects to address the longitudinal and cross-sectional dependence, respectively. The estimation procedure for this model is discussed, as are the results of a simulation study used to investigate the properties of its estimates. An illustration using data from a smoking prevention trial is given.
82.
For regression on state and transition probabilities in multi-state models, Andersen et al. (Biometrika 90:15-27, 2003) propose a technique based on jackknife pseudo-values. In this article we analyze the pseudo-values suggested for competing risks models and prove some conjectures regarding their asymptotics (Klein and Andersen, Biometrics 61:223-229, 2005). The key is a second-order von Mises expansion of the Aalen-Johansen estimator, which yields an appropriate representation of the pseudo-values. The method is illustrated with data from a clinical study on total joint replacement. In the application we consider, for comparison, the estimates obtained with the Fine and Gray approach (J Am Stat Assoc 94:496-509, 1999) and also time-dependent solutions of pseudo-value regression equations.
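The jackknife pseudo-value construction itself is easy to sketch. Below is a minimal numpy illustration for the plain survival setting, using Kaplan-Meier rather than the Aalen-Johansen estimator of the competing-risks paper, with the pseudo-values fed into an ordinary least-squares pseudo-value regression; the simulated data and helper names are ours:

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier estimate of S(t0) for right-censored data (no ties assumed)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    s = 1.0
    for j in range(n):
        if time[j] > t0:
            break
        if event[j]:
            s *= 1.0 - 1.0 / (n - j)   # n - j subjects still at risk
    return s

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
t_event = rng.exponential(np.exp(-0.5 * x))   # larger x -> earlier events
t_cens = rng.exponential(2.0, size=n)
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens

t0 = np.median(time)
full = km_surv(time, event, t0)

# Jackknife pseudo-values: theta_i = n * S_hat(t0) - (n - 1) * S_hat^(-i)(t0)
pseudo = np.empty(n)
mask = np.ones(n, dtype=bool)
for i in range(n):
    mask[i] = False
    pseudo[i] = n * full - (n - 1) * km_surv(time[mask], event[mask], t0)
    mask[i] = True

# Pseudo-value regression with identity link: E[theta | x] = a + b * x
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, pseudo, rcond=None)
print(coef)   # b < 0: larger x lowers the survival probability at t0
```

In the competing-risks case the same jackknife recipe is applied to the Aalen-Johansen cumulative incidence estimate instead of the Kaplan-Meier survival curve.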
83.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data-driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other hand. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that the weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when using modified kernels. The weighted kernel approach generalizes to the case of multivariate deconvolution density estimation in a very straightforward manner.
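The weight-selection strategy, matching the error-convolved weighted estimate to a naive kernel estimate of the contaminated data, can be sketched directly. This is a minimal illustration of the constrained (non-penalized) variant under our own simplifying assumptions: Gaussian kernels and a known N(0, s^2) measurement-error density, so the convolution in the objective is again Gaussian:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, h, s = 100, 0.35, 0.4                 # sample size, bandwidth, error SD
x_true = rng.normal(0.0, 1.0, n)         # latent variable of interest
w_obs = x_true + rng.normal(0.0, s, n)   # contaminated observations

def gauss(u, sd):
    return np.exp(-0.5 * (u / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

grid = np.linspace(-4.0, 4.0, 80)
# Standard KDE of the *contaminated* data, evaluated on the grid.
naive = gauss(grid[:, None] - w_obs[None, :], h).mean(axis=1)

# Each weighted kernel convolved with the N(0, s^2) error density:
# for Gaussian kernels the convolution is Gaussian with sd = sqrt(h^2 + s^2).
A = gauss(grid[:, None] - w_obs[None, :], np.sqrt(h * h + s * s))

def discrepancy(w):
    return np.sum((A @ w - naive) ** 2)

w0 = np.full(n, 1.0 / n)
res = minimize(discrepancy, w0, method="SLSQP",
               bounds=[(0.0, None)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               options={"maxiter": 200})
weights = res.x

# Weighted kernel density estimate of the latent density; non-negative by design.
f_hat = gauss(grid[:, None] - w_obs[None, :], h) @ weights
```

Because the weights are constrained to be non-negative and to sum to one, `f_hat` is a proper mixture of kernels, which is exactly why the tail-negativity problem of modified-kernel deconvolution cannot arise here.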
84.
Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
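The basic calculations lend themselves to a short sketch. The toy study below (our own example, not one from the article) estimates the coverage of a nominal 95% confidence interval, attaches a Monte Carlo standard error to that estimate, and inverts the binomial variance formula to find how many replications a target accuracy requires:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy simulation study: coverage of the usual z-based 95% CI for a normal mean, n = 20.
def one_rep(rng, n=20):
    x = rng.normal(0.0, 1.0, n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)
    return abs(x.mean()) <= half          # does the CI cover the true mean 0?

R = 2000
hits = np.array([one_rep(rng) for _ in range(R)], dtype=float)
p_hat = hits.mean()

# Monte Carlo standard error of the estimated coverage proportion.
mc_se = np.sqrt(p_hat * (1.0 - p_hat) / R)

# Replications needed so the Monte Carlo SE of a proportion near p_hat
# falls below a target d: R >= p(1 - p) / d^2.
d = 0.002
R_needed = int(np.ceil(p_hat * (1.0 - p_hat) / d ** 2))
print(p_hat, mc_se, R_needed)
```

Note that the estimated coverage falls a little short of 95% because the sketch deliberately uses the z critical value 1.96 instead of the t critical value appropriate for n = 20; the Monte Carlo error calculation is what makes that shortfall distinguishable from simulation noise.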
85.
86.
A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations
Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally by Graw et al. (Lifetime Data Anal., 15, 2009, 241), who derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error estimates will typically be too large, although in many practical applications the difference will be of minor importance. We show how to correctly estimate the variability of the estimator. This is investigated further in simulation studies.
87.
Techniques used in variability assessment are used to draw conclusions regarding the "spread" or uniformity of data curves. These techniques are not adequate for circumstances where data manifest with multiple peaks. Examples of such manifestations (in three-dimensional space) include under-foot pressure distributions recorded for different types of footwear (Becerro-de-Bengoa-Vallejo et al., 2014; Cibulka et al., 1994; Davies et al., 2003), surface textures and interfaces designed to affect friction, and molecular surface structures such as viral epitopes (Torras and Garcia-Valls, 2004; Pacejka, 1997; Fustaffson, 1997). This article proposes a technique for generating a single variable, Λ, that quantifies the uniformity of such surfaces. We define and validate this technique using several mathematical and graphical models.
88.
The nonparametric two-sample bootstrap is applied to computing uncertainties of measures in receiver operating characteristic (ROC) analysis on large datasets, in areas such as biometrics and speaker recognition, when the analytical method cannot be used. Its validity was studied by computing the standard error of the area under the ROC curve using the well-established analytical Mann-Whitney statistic method and also using the bootstrap. The analytical result is unique, whereas the bootstrap results form a probability distribution owing to the method's stochastic nature. The comparisons were carried out using relative errors and hypothesis testing, and the two approaches agree closely. This validation provides a sound foundation for such computations.
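The comparison can be sketched on simulated scores (not the study's biometric data). Here the AUC is computed as the Mann-Whitney probability, its analytical standard error via the placement-value (DeLong-type) form of the Mann-Whitney variance, and the bootstrap counterpart by resampling each class independently, as in the two-sample bootstrap:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 400, 400
neg = rng.normal(0.0, 1.0, m)        # scores for the negative class
pos = rng.normal(1.0, 1.0, n)        # scores for the positive class

# AUC as the Mann-Whitney probability P(pos > neg), ties counted as 1/2.
cmp_ = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
auc = cmp_.mean()

# Analytical SE from placement values (DeLong-type decomposition).
v_pos = cmp_.mean(axis=1)            # placement value of each positive score
v_neg = cmp_.mean(axis=0)            # placement value of each negative score
se_analytic = np.sqrt(v_pos.var(ddof=1) / n + v_neg.var(ddof=1) / m)

# Nonparametric two-sample bootstrap: resample each class independently.
B = 500
aucs = np.empty(B)
for b in range(B):
    pb = rng.choice(pos, n, replace=True)
    nb = rng.choice(neg, m, replace=True)
    aucs[b] = ((pb[:, None] > nb[None, :]) + 0.5 * (pb[:, None] == nb[None, :])).mean()
se_boot = aucs.std(ddof=1)
print(auc, se_analytic, se_boot)
```

With well-separated Gaussian score distributions the two standard errors land within a few percent of each other, which is the kind of agreement the validation study relies on.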
89.
Very little is known about the local power of second-generation panel unit root tests that are robust to cross-section dependence. This article derives the local asymptotic power functions of the cross-section augmented Dickey-Fuller (CADF) and CIPS tests of Pesaran (2007), which are among the most popular such tests.
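The test statistics themselves are simple to sketch. In Pesaran's CADF regression each unit's Dickey-Fuller regression is augmented with cross-section averages (which proxy the common factor), and CIPS averages the resulting t-statistics. The numpy sketch below uses simulated data, no lag augmentation, and intercepts only; it illustrates the statistics, not the test's critical values or the paper's local power analysis:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T, rho = 20, 120, 0.9
f = rng.normal(size=T)                  # common factor -> cross-section dependence
gam = rng.uniform(0.5, 1.5, N)          # heterogeneous factor loadings
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + gam * f[t] + rng.normal(size=N)

ybar = y.mean(axis=0)                   # cross-section averages proxy the factor
t_stats = np.empty(N)
for i in range(N):
    dy = y[i, 1:] - y[i, :-1]
    X = np.column_stack([
        np.ones(T - 1),
        y[i, :-1],                      # lagged level: coefficient under test
        ybar[:-1],                      # lagged cross-section average
        ybar[1:] - ybar[:-1],           # differenced cross-section average
    ])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    t_stats[i] = beta[1] / np.sqrt(cov[1, 1])   # unit-i CADF statistic

cips = t_stats.mean()                   # CIPS: cross-section average of CADF stats
print(cips)   # clearly negative here, since the simulated panel is stationary
```

Since the panel is generated with rho = 0.9 (a stationary alternative), the CIPS statistic comes out strongly negative; under a unit root (rho = 1) it would concentrate near the test's nonstandard null distribution instead.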
90.
One in five students report experimenting with tobacco before the age of 13, and most prevention efforts take place in the school setting. This study measures the effect of a single-lesson tobacco prevention curriculum, conducted by a health education center, focusing on knowledge of tobacco, ability to identify refusal techniques, and intent not to smoke. Data were collected, via electronic keypads, from students visiting a non-school health education center in Michigan (n = 704 intervention and 85 comparison). Contingency-table chi-squared tests and t-tests demonstrated that a single lesson can improve general knowledge and ability to identify appropriate refusal techniques. Improvement in intent not to smoke was not significant because both groups had very high intent prior to implementation. Similar to results from other programs, multivariate logistic regression of gender, general knowledge, and skill identification revealed that only the skill variable was associated with intent not to smoke at pretest. Recommendations are given for further research and for designing more effective curricula or programs.