991.
Jörg Drechsler, Agnes Dundler, Stefan Bender, Susanne Rässler, Thomas Zwick 《AStA Advances in Statistical Analysis》2008,92(4):439-458
For micro-datasets considered for release as scientific or public use files, statistical agencies face the dilemma of guaranteeing the confidentiality of survey respondents on the one hand and offering sufficiently detailed data on the other. For that reason, a variety of methods to guarantee disclosure control is discussed in the literature. In this paper, we present an application of Rubin’s (J. Off. Stat. 9, 462–468, 1993) idea of generating synthetic datasets from existing confidential survey data for public release. We use a set of variables from the 1997 wave of the German IAB Establishment Panel and evaluate the quality of the approach by comparing the results of an analysis by Zwick (Ger. Econ. Rev. 6(2), 155–184, 2005) on the original data with the results we obtain from the same analysis run on the dataset after the imputation procedure. The comparison shows that valid inferences can be obtained from the synthetic datasets in this context, while confidentiality is guaranteed for the survey participants.
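The workflow described here (fit a synthesizer to the confidential data, release several synthetic copies, and combine the analyst's estimates across copies) can be sketched as follows. This is a toy illustration, not the authors' IAB implementation: the made-up data, the simple normal-regression synthesizer and the variance-combining rule for partially synthetic data (Reiter 2003) are all assumptions of the sketch.

```python
# Toy sketch of the multiple-imputation workflow for synthetic data release
# (Rubin 1993).  The data, the normal-regression synthesizer and the variance
# combining rule for partially synthetic data (Reiter 2003) are illustrative
# assumptions, not the authors' IAB implementation.
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 10                                   # records, synthetic copies

# "Confidential" toy data: sensitive outcome y depends linearly on x.
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)

def fit_ols(x, y):
    """Return OLS coefficients, the slope's sampling variance, residual sd."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    u = sigma2 / ((x - x.mean()) ** 2).sum()     # variance of the slope estimate
    return beta, u, np.sqrt(sigma2)

# Synthesizer: replace y by draws from the model fitted to the real data.
# (A fully proper synthesizer would also draw the parameters from their
# posterior before generating each copy; they are kept fixed here for brevity.)
beta_hat, _, sigma_hat = fit_ols(x, y)
copies = [beta_hat[0] + beta_hat[1] * x + rng.normal(scale=sigma_hat, size=n)
          for _ in range(m)]

# Analyst: run the same regression on every released copy, then combine.
fits = [fit_ols(x, ys) for ys in copies]
qs = np.array([b[1] for b, _, _ in fits])        # slope estimate per copy
us = np.array([u for _, u, _ in fits])           # its sampling variance
q_bar, u_bar, b_m = qs.mean(), us.mean(), qs.var(ddof=1)
T = u_bar + b_m / m         # variance estimate for partially synthetic data
print(f"combined slope: {q_bar:.3f} (true 1.5), estimated variance: {T:.5f}")
```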
992.
In this paper we determine the Gauss–Markov predictor of the nonobservable part of a random vector satisfying the linear model under a linear constraint.
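As a purely generic illustration of the objects involved (the notation below is mine and not necessarily the paper's exact setting or result), one can write the observed and unobservable parts as a partitioned linear model, estimate the coefficients by constrained generalized least squares, and predict the unobserved part with the usual BLUP formula:

```latex
% Generic constrained-GLS / BLUP sketch (illustrative notation only)
\begin{aligned}
&\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
 = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}\beta + \varepsilon,
 \qquad
 \operatorname{Cov}(\varepsilon) =
 \begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix},
 \qquad R\beta = r \ \ \text{(linear constraint)},\\[4pt]
&\hat\beta = (X_1^{\top}V_{11}^{-1}X_1)^{-1}X_1^{\top}V_{11}^{-1}y_1,\qquad
 \hat\beta_c = \hat\beta
   - (X_1^{\top}V_{11}^{-1}X_1)^{-1}R^{\top}
     \bigl(R(X_1^{\top}V_{11}^{-1}X_1)^{-1}R^{\top}\bigr)^{-1}(R\hat\beta - r),\\[4pt]
&\hat y_2 = X_2\hat\beta_c + V_{21}V_{11}^{-1}(y_1 - X_1\hat\beta_c).
\end{aligned}
```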
993.
We propose a parametric test for bimodality based on the likelihood principle by using two-component mixtures. The test uses explicit characterizations of the modal structure of such mixtures in terms of their parameters. Examples include the univariate and multivariate normal distributions and the von Mises distribution. We present the asymptotic distribution of the proposed test and analyze its finite sample performance in a simulation study. To illustrate our method, we use mixtures to investigate the modal structure of the cross-sectional distribution of per capita log GDP across EU regions from 1977 to 1993. Although these mixtures clearly have two components over the whole time period, the resulting distributions evolve from bimodality toward unimodality at the end of the 1970s.
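A rough numerical companion to this idea, under assumptions of the sketch rather than of the paper (simulated data, scikit-learn's mixture fitter, and a grid-based mode count instead of the explicit parametric characterization and likelihood-based test):

```python
# Fit a two-component normal mixture and count the modes of the fitted density
# on a grid.  This only illustrates the modal-structure idea; it is not the
# paper's parametric characterization or its asymptotic test.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

gm = GaussianMixture(n_components=2, random_state=0).fit(data.reshape(-1, 1))

grid = np.linspace(data.min(), data.max(), 2000).reshape(-1, 1)
dens = np.exp(gm.score_samples(grid))        # fitted two-component density
# A mode is an interior grid point higher than both of its neighbours.
is_mode = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
print("number of modes in the fitted mixture:", int(is_mode.sum()))
```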
994.
We consider efficient estimation of regression and association parameters jointly for bivariate current status data under the marginal proportional hazards model. Current status data occur in many fields, including demographic studies and tumorigenicity experiments, and several approaches have been proposed for regression analysis of univariate current status data. We discuss bivariate current status data and propose an efficient score estimation approach for the problem. In the approach, a copula model is used for the joint survival function, with the survival times assumed to follow the proportional hazards model marginally. Simulation studies are performed to evaluate the proposed estimates and suggest that the approach works well in practical situations. A real-life data application is provided for illustration.
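To make the model structure concrete, the joint survival function under a copula with proportional-hazards margins has the form sketched below; the notation is mine, and the Clayton family is shown only as one common example rather than as the copula necessarily used in the paper.

```latex
% Copula model with proportional-hazards margins (illustrative notation)
S(t_1, t_2 \mid Z_1, Z_2) = C_{\alpha}\!\bigl(S_1(t_1 \mid Z_1),\, S_2(t_2 \mid Z_2)\bigr),
\qquad
S_j(t \mid Z_j) = \exp\!\bigl(-\Lambda_{0j}(t)\, e^{\beta^{\top} Z_j}\bigr),
\qquad
C_{\alpha}(u,v) = \bigl(u^{-\alpha} + v^{-\alpha} - 1\bigr)^{-1/\alpha},\ \ \alpha > 0
\ \text{(Clayton, one common choice)}.
```

With current status data each survival time is observed only as an indicator of whether it exceeds its monitoring time, so the likelihood is built from this joint survival function evaluated at the monitoring times.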
995.
Gail MH 《Lifetime data analysis》2008,14(1):18-36
Absolute risk is the chance that a person with given risk factors and free of the disease of interest at age a will be diagnosed with that disease in the interval (a, a + τ]. Absolute risk is sometimes called cumulative incidence. Absolute risk is a “crude” risk because it is reduced by the chance that the person will die of competing causes of death before developing the disease of interest. Cohort studies admit flexibility in modeling absolute risk, either by allowing covariates to affect the cause-specific relative hazards or to affect the absolute risk itself. An advantage of cause-specific relative risk models is that various data sources can be used to fit the required components. For example, case–control data can be used to estimate relative risk and attributable risk, and these can be combined with registry data on age-specific composite hazard rates for the disease of interest and with national data on competing hazards of mortality to estimate absolute risk. Family-based designs, such as the kin-cohort design and collections of pedigrees with multiple affected individuals, can be used to estimate the genotype-specific hazard of disease. Such analyses must be adjusted for ascertainment, and failure to take into account residual familial risk, such as might be induced by unmeasured genetic variants or by unmeasured behavioral or environmental exposures that are correlated within families, can lead to overestimates of mutation-specific absolute risk in the general population.
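In standard cause-specific-hazard notation (the notation is mine, but this is the usual expression for a crude risk of this kind), the absolute risk over (a, a + τ] is

```latex
% Absolute (crude) risk over (a, a + tau] with competing mortality
\mathrm{AR}(a,\tau)
  \;=\; \int_{a}^{a+\tau} h_1(t)\,
        \exp\!\Bigl(-\int_{a}^{t}\bigl[h_1(u)+h_2(u)\bigr]\,du\Bigr)\,dt,
```

where h_1 is the cause-specific hazard of the disease of interest (for example a baseline hazard multiplied by a covariate-specific relative risk) and h_2 is the hazard of death from competing causes; the exponential term is the probability of reaching age t still alive and free of the disease.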
996.
Small area estimation: the EBLUP estimator based on spatially correlated random area effects
This paper deals with small area indirect estimators under area level random effect models when only area level data are available and the random effects are correlated. The performance of the Spatial Empirical Best Linear Unbiased Predictor (SEBLUP) is explored with a Monte Carlo simulation study on lattice data, and it is applied to the results of the sample survey on Life Conditions in Tuscany (Italy). The mean squared error (MSE) problem is discussed, illustrating the MSE estimator in comparison with the MSE of the empirical sampling distribution of the SEBLUP estimator. A clear tendency in our empirical findings is that the introduction of spatially correlated random area effects reduces both the variance and the bias of the EBLUP estimator. Despite some residual bias, the coverage rate of our confidence intervals comes close to the nominal 95%.
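The model behind such a spatial EBLUP is typically an area-level (Fay–Herriot type) model with simultaneously autoregressive random effects; the sketch below uses my own notation and is not copied from the paper.

```latex
% Area-level model with SAR-correlated random effects (illustrative notation)
\hat\theta_i = x_i^{\top}\beta + u_i + e_i,
\qquad e_i \sim N(0, \psi_i)\ \text{with known sampling variances } \psi_i,
\qquad
u = \rho W u + \varepsilon,\ \ \varepsilon \sim N(0, \sigma_u^{2} I)
\;\Longrightarrow\;
u \sim N\!\Bigl(0,\ \sigma_u^{2}\bigl[(I-\rho W)^{\top}(I-\rho W)\bigr]^{-1}\Bigr),
```

where W is a spatial proximity matrix for the small areas. The SEBLUP plugs estimates of (β, σ_u², ρ) into the BLUP under this covariance, and the MSE estimation discussed above has to account for that extra estimation step.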
997.
Computational methods for complex stochastic systems: a review of some alternatives to MCMC
Paul Fearnhead 《Statistics and Computing》2008,18(2):151-171
We consider analysis of complex stochastic models based upon partial information. MCMC and reversible jump MCMC are often the methods of choice for such problems, but in some situations they can be difficult to implement and can suffer from problems such as poor mixing and the difficulty of diagnosing convergence. Here we review three alternatives to MCMC methods: importance sampling, the forward-backward algorithm, and sequential Monte Carlo (SMC). We discuss how to design good proposal densities for importance sampling, show some of the range of models for which the forward-backward algorithm can be applied, and show how resampling ideas from SMC can be used to improve the efficiency of the other two methods. We demonstrate these methods on a range of examples, including estimating the transition density of a diffusion and of a discrete-state continuous-time Markov chain; inferring structure in population genetics; and segmenting genetic divergence data.
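A minimal sketch of the first and third ingredients, importance sampling followed by an SMC-style resampling step, on a toy target; the target, the proposal and the sample size are choices made for this sketch and are not taken from the paper's examples.

```python
# Importance sampling with an SMC-style resampling step on a toy target.
import numpy as np

rng = np.random.default_rng(2)
N = 10_000

def log_target(x):        # unnormalised target: standard normal
    return -0.5 * x**2

def log_proposal(x):      # proposal density: normal with sd 2
    return -0.5 * (x / 2.0)**2 - np.log(2.0)

x = rng.normal(scale=2.0, size=N)            # draw from the proposal
logw = log_target(x) - log_proposal(x)       # importance log-weights
w = np.exp(logw - logw.max())
w /= w.sum()                                 # self-normalised weights

print("IS estimate of E[X^2]:", np.sum(w * x**2))      # should be close to 1
print("effective sample size:", 1.0 / np.sum(w**2))

# Resampling, as used in SMC: duplicate high-weight draws and drop low-weight
# ones, leaving an equally weighted sample that could be propagated further.
resampled = rng.choice(x, size=N, replace=True, p=w)
print("post-resampling estimate of E[X^2]:", np.mean(resampled**2))
```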
998.
Saralees Nadarajah 《Statistical Papers》2008,49(2):387-389
999.
1000.
Essential graphs and largest chain graphs are well-established graphical representations of equivalence classes of directed acyclic graphs and chain graphs respectively, especially useful in the context of model selection. Recently, the notion of a labelled block ordering of vertices B was introduced as a flexible tool for specifying subfamilies of chain graphs. In particular, both the family of directed acyclic graphs and the family of “unconstrained” chain graphs can be specified in this way, for the appropriate choice of B. The family of chain graphs identified by a labelled block ordering of vertices B is partitioned into equivalence classes, each represented by means of a B-essential graph. In this paper, we introduce a topological ordering of meta-arrows and use this concept to devise an efficient procedure for the construction of B-essential graphs. In this way we also provide an efficient procedure for the construction of both largest chain graphs and essential graphs. The key feature of the proposed procedure is that every meta-arrow needs to be processed only once.
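The efficiency claim rests on visiting each meta-arrow exactly once in a topological order. As a loose illustration of that single-pass pattern only (the meta-arrows and the B-essential-graph construction themselves are not reproduced here), a plain topological ordering of a DAG via Kahn's algorithm looks like this:

```python
# Plain topological ordering of a DAG (Kahn's algorithm), shown only to
# illustrate the "process each arrow once" pattern; not the paper's procedure.
from collections import deque

def topological_order(nodes, edges):
    """Return a topological ordering of a DAG given as (tail, head) edges."""
    succ = {v: [] for v in nodes}
    indeg = {v: 0 for v in nodes}
    for tail, head in edges:
        succ[tail].append(head)
        indeg[head] += 1
    queue = deque(v for v in nodes if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:            # each edge is processed exactly once
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return order

print(topological_order("abcd", [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
```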