991.
Michael S. Rendall, Ryan Admiraal, Alessandra DeRose, Paola DiGiulio, Mark S. Handcock, Filomena Racioppi. Statistical Methods and Applications, 2008, 17(4): 519-539
In non-experimental research, data on the same population process may be collected simultaneously by more than one instrument. For example, in the present application, two sample surveys and a population birth registration system all collect observations on first births by age and year, while the two surveys additionally collect information on women's education. To make maximum use of the three data sources, the survey data are pooled and the population data are introduced as constraints in a logistic regression equation. Reductions of around three-quarters in the standard errors of the age and birth-cohort parameters are obtained by introducing the population data as constraints. A halving of the standard errors of the education parameters is achieved by pooling observations from the larger survey dataset with those from the smaller survey. The percentage reduction in the standard errors through imposing population constraints is independent of the total survey sample size.
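The pooling effect on standard errors described in the abstract can be sketched in a few lines. Everything below (the simulated surveys, the education-style covariate, the coefficient values) is illustrative, not taken from the paper, and the population-constraint step is omitted; the sketch only shows how pooling a larger survey with a smaller one shrinks the standard errors from a hand-coded logistic MLE:

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Logistic regression MLE via Newton-Raphson; returns (beta, standard errors)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (W[:, None] * X)              # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))     # SEs from the inverse information
    return beta, se

rng = np.random.default_rng(1)

def simulate_survey(n):
    x = rng.normal(size=n)                      # a standardized education-style covariate
    logits = -1.0 + 0.8 * x
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
    return np.column_stack([np.ones(n), x]), y

X1, y1 = simulate_survey(500)                   # smaller survey
X2, y2 = simulate_survey(2000)                  # larger survey

_, se_small = fit_logit(X1, y1)
_, se_pooled = fit_logit(np.vstack([X1, X2]), np.concatenate([y1, y2]))
print(se_small[1], se_pooled[1])                # pooled SE is noticeably smaller
```

With the survey sizes above, the slope SE shrinks roughly by the expected factor of sqrt(500/2500).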
992.
One of the greatest challenges in using piecewise exponential models (PEMs) is finding an adequate grid of time-points for their construction. In general, the number of intervals in such a grid and the positions of their endpoints are ad hoc choices. We extend previous work by introducing a fully Bayesian approach to the piecewise exponential model in which the grid of time-points (and, consequently, the endpoints and the number of intervals) is random. We estimate the failure rates using the proposed procedure and compare the results with non-parametric piecewise exponential estimates. Estimates of the survival function using the most probable partition are compared with the Kaplan-Meier estimators (KMEs). A sensitivity analysis for the proposed model is provided, considering different prior specifications for the failure rates and for the grid. We also evaluate the effect of different percentages of censored observations on the estimates. An application to a real data set is also provided. We notice that the posteriors are strongly influenced by the prior specifications, mainly for the failure-rate parameters. Thus, the priors must be carefully constructed, genuinely reflecting expert prior opinion.
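For context on what the grid controls, here is a minimal frequentist sketch of a piecewise exponential fit on a *fixed* grid (the paper's contribution is to make the grid itself random and Bayesian, which this sketch does not attempt). The data, rates, and grid cut-points are invented for illustration:

```python
import numpy as np

def pem_hazard(times, events, grid):
    """Piecewise-constant hazard MLE on a fixed grid:
    lambda_k = (events in interval k) / (total exposure time in interval k)."""
    edges = np.concatenate([[0.0], grid, [np.inf]])
    rates = []
    for a, b in zip(edges[:-1], edges[1:]):
        exposure = np.clip(np.minimum(times, b) - a, 0.0, None).sum()
        deaths = np.sum((times >= a) & (times < b) & events)
        rates.append(deaths / exposure if exposure > 0 else np.nan)
    return np.array(rates)

rng = np.random.default_rng(0)
t = rng.exponential(scale=2.0, size=5000)   # true constant hazard 0.5
c = rng.exponential(scale=6.0, size=5000)   # independent censoring times
obs = np.minimum(t, c)
ev = t <= c                                  # event indicator (True = failure observed)
rates = pem_hazard(obs, ev, grid=np.array([1.0, 2.0, 4.0]))
print(rates)                                 # each estimate should be close to 0.5
```

Because the simulated hazard is constant, every interval's estimate should recover roughly the same rate regardless of where the grid cut-points fall; for non-constant hazards the grid choice matters, which is the problem the paper addresses.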
993.
994.
Unfortunately, many of the numerous algorithms for computing the cumulative distribution function (cdf) and noncentrality parameter of the noncentral F and beta distributions can produce completely incorrect results, as demonstrated by examples in the paper. Existing algorithms are scrutinized and the parts that involve numerical difficulties are identified. As a result, pseudocode is presented in which all the known numerical problems are resolved. This pseudocode can easily be implemented in C or FORTRAN without understanding the complicated mathematical background.

Symbolic evaluation of a finite, closed-form formula is proposed to compute exact cdf values. This approach makes it possible to check quickly and reliably the values returned by professional statistical packages over an extraordinarily wide parameter range without any programming knowledge.

This research was motivated by the fact that a very useful table for calculating the size of detectable effects for ANOVA tables contains suspect values in the region of large noncentrality parameter values, compared to the values obtained by Patnaik's two-moment central-F approximation. The cause is identified and the corrected form of the table for ANOVA purposes is given. The accuracy of the approximations to the noncentral-F distribution is also discussed.
The authors wish to thank Mr. Richárd Király for his preliminary work. The authors are grateful to the Editor and Associate Editor of STCO and the anonymous reviewers for their helpful suggestions.
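One way to cross-check a library's noncentral F cdf, in the spirit of the verification the abstract describes, is to evaluate the standard Poisson-mixture series independently and compare. This sketch is not the paper's pseudocode; it uses the textbook representation P(F <= f) = sum_j Pois(j; lam/2) * I_x(d1/2 + j, d2/2) with x = d1 f / (d1 f + d2), and the test parameter values are arbitrary:

```python
import numpy as np
from scipy.stats import ncf, poisson
from scipy.special import betainc

def ncf_cdf_series(f, d1, d2, lam, tol=1e-14):
    """Noncentral F cdf as a Poisson-weighted mixture of regularized
    incomplete beta functions, summed until the Poisson tail is negligible."""
    x = d1 * f / (d1 * f + d2)
    total, j = 0.0, 0
    while True:
        w = poisson.pmf(j, lam / 2.0)
        total += w * betainc(d1 / 2.0 + j, d2 / 2.0, x)
        # remaining terms are each <= 1, so the tail mass bounds the truncation error
        if j > lam / 2.0 and poisson.sf(j, lam / 2.0) < tol:
            return total
        j += 1

for f, d1, d2, lam in [(2.5, 3, 20, 5.0), (1.0, 10, 10, 30.0)]:
    print(f, ncf_cdf_series(f, d1, d2, lam), ncf.cdf(f, d1, d2, lam))
```

For moderate parameters the two computations agree closely; the paper's point is that for extreme parameter ranges naive summations like this one can fail, which is why its exact symbolic evaluation is useful as a reference.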
995.
A tutorial on adaptive MCMC
We review adaptive Markov chain Monte Carlo (MCMC) algorithms as a means of optimising their performance. Using simple toy examples, we review their theoretical underpinnings and, in particular, show why adaptive MCMC algorithms might fail when some fundamental properties are not satisfied. This leads to guidelines for the design of correct algorithms. We then review criteria and the useful framework of stochastic approximation, which allows one both to systematically optimise commonly used criteria and to analyse the properties of adaptive MCMC algorithms. We then propose a series of novel adaptive algorithms that prove to be robust and reliable in practice. These algorithms are applied to artificial and high-dimensional scenarios, as well as to the classic mine disaster dataset inference problem.
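A minimal example of the stochastic-approximation idea mentioned above is a random-walk Metropolis sampler whose proposal scale is tuned by a Robbins-Monro recursion toward a target acceptance rate. This is a generic textbook sketch (target density, step-size schedule, and the 0.44 one-dimensional target rate are common defaults, not the paper's specific algorithms):

```python
import numpy as np

def adaptive_rwm(logpi, x0, n=20000, target=0.44):
    """Random-walk Metropolis with the log proposal scale adapted by a
    Robbins-Monro recursion toward a target acceptance rate."""
    rng = np.random.default_rng(42)
    x, log_sigma = x0, 0.0
    samples, accepts = [], 0
    for t in range(1, n + 1):
        prop = x + np.exp(log_sigma) * rng.normal()
        accept_prob = min(1.0, np.exp(logpi(prop) - logpi(x)))
        if rng.random() < accept_prob:
            x = prop
            accepts += 1
        # diminishing adaptation: step sizes t^{-0.6} shrink, so the
        # transition kernel stabilizes and ergodicity is preserved
        log_sigma += t ** -0.6 * (accept_prob - target)
        samples.append(x)
    return np.array(samples), accepts / n

# standard normal target; the chain should settle near the target acceptance rate
samples, acc_rate = adaptive_rwm(lambda v: -0.5 * v * v, x0=0.0)
print(acc_rate)
```

The diminishing step size is exactly the "fundamental property" the tutorial emphasizes: adapting too aggressively forever can destroy the stationary distribution.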
996.
Michael Kohler. AStA Advances in Statistical Analysis, 2008, 92(2): 153-178
American options in discrete time can be priced by solving optimal stopping problems. This can be done by computing so-called continuation values, which we represent as regression functions defined recursively using the continuation values of the next time step. We use Monte Carlo simulation to generate data, and then apply smoothing spline regression estimates to estimate the continuation values from these data. All parameters of the estimate are chosen data-dependently. We present results concerning consistency and the estimates' rate of convergence.
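The regression-based recursion described above can be sketched with the classic Longstaff-Schwartz scheme. Note the substitution: the paper uses smoothing spline estimates with data-driven parameters, while this sketch uses an ordinary cubic polynomial least-squares fit; the option parameters are arbitrary illustrative values:

```python
import numpy as np

def american_put_lsm(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=20000, seed=7):
    """Regression Monte Carlo for an American put: continuation values are
    estimated by regressing discounted future cashflows on the asset price."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.normal(size=(steps, paths))
    # geometric Brownian motion paths under the risk-neutral measure
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=0))
    cash = np.maximum(K - S[-1], 0.0)            # payoff if held to expiry
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                  # discount one step back
        itm = K - S[t] > 0.0                     # regress on in-the-money paths only
        if itm.sum() > 10:
            coef = np.polyfit(S[t][itm], cash[itm], deg=3)
            cont = np.polyval(coef, S[t][itm])   # estimated continuation value
            exercise = np.maximum(K - S[t][itm], 0.0) > cont
            idx = np.where(itm)[0][exercise]
            cash[idx] = K - S[t][idx]            # exercise: replace future cashflow
    return np.exp(-r * dt) * cash.mean()

price = american_put_lsm()
print(price)  # should exceed the European Black-Scholes put value (about 5.57)
```

The early-exercise premium is visible in the result: the American price lies above the European value for the same parameters.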
997.
In this work a new type of logits and odds ratios, which includes as special cases the continuation and reverse-continuation logits and odds ratios, is defined. We prove that these logits and odds ratios define a parameterization of the joint probabilities of a two-way contingency table. The problem of testing equality and inequality constraints on these logits and odds ratios is examined, with particular regard to two new hypotheses of monotone dependence. Work partially supported by a MIUR2004 grant. Preliminary findings were presented at the SIS (Società Italiana di Statistica) Annual Meeting, Torino, 2006.
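To fix ideas, here is one common variant of continuation odds ratios for a two-way table (the paper's new family is more general than this; the formula below and the example counts are illustrative):

```python
import numpy as np

def continuation_log_odds_ratios(table):
    """Continuation log odds ratios for an I x J contingency table:
    theta_ij = [p(i,j) * p(>i,>j)] / [p(i,>j) * p(>i,j)],
    one value for each (i, j) with i < I and j < J."""
    p = table / table.sum()
    I, J = p.shape
    out = np.empty((I - 1, J - 1))
    for i in range(I - 1):
        for j in range(J - 1):
            a = p[i, j]
            b = p[i, j + 1:].sum()        # P(X = i, Y > j)
            c = p[i + 1:, j].sum()        # P(X > i, Y = j)
            d = p[i + 1:, j + 1:].sum()   # P(X > i, Y > j)
            out[i, j] = np.log(a * d / (b * c))
    return out

# under independence every continuation log odds ratio is zero
rows = np.array([0.2, 0.3, 0.5])
cols = np.array([0.1, 0.4, 0.5])
indep = np.outer(rows, cols) * 1000
lor = continuation_log_odds_ratios(indep)
print(lor)
```

Hypotheses of monotone dependence of the kind the paper studies can be phrased as sign constraints (e.g. all such log odds ratios nonnegative), which is why a parameterization in these terms is convenient for testing.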
998.
Jörg Drechsler, Agnes Dundler, Stefan Bender, Susanne Rässler, Thomas Zwick. AStA Advances in Statistical Analysis, 2008, 92(4): 439-458
For micro-datasets considered for release as scientific or public use files, statistical agencies face the dilemma of guaranteeing the confidentiality of survey respondents on the one hand and offering sufficiently detailed data on the other. For that reason, a variety of methods to guarantee disclosure control is discussed in the literature. In this paper, we present an application of Rubin's (J. Off. Stat. 9, 462–468, 1993) idea to generate synthetic datasets from existing confidential survey data for public release. We use a set of variables from the 1997 wave of the German IAB Establishment Panel and evaluate the quality of the approach by comparing the results of an analysis by Zwick (Ger. Econ. Rev. 6(2), 155–184, 2005) on the original data with the results of the same analysis run on the dataset after the imputation procedure. The comparison shows that valid inferences can be obtained from the synthetic datasets in this context, while confidentiality is guaranteed for the survey participants.
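The core of Rubin's idea can be illustrated on toy data: fit a model to the confidential records, then release draws from the fitted model instead of the records themselves. This is a deliberately simplified sketch (one synthetic file, a plain normal linear model, invented wage/firm-size variables); the actual methodology uses multiple synthetic datasets and dedicated combining rules for inference:

```python
import numpy as np

rng = np.random.default_rng(3)

# "confidential" data: wages depend linearly on a firm-size covariate
n = 1000
size = rng.normal(5.0, 1.0, n)
wage = 2.0 + 0.6 * size + rng.normal(0.0, 0.5, n)

# fit a normal linear model on the confidential data ...
X = np.column_stack([np.ones(n), size])
beta = np.linalg.lstsq(X, wage, rcond=None)[0]
resid_sd = np.std(wage - X @ beta)

# ... and release synthetic wages drawn from the fitted model
wage_syn = X @ beta + rng.normal(0.0, resid_sd, n)

# an analyst's regression on the synthetic file recovers similar estimates,
# while no original wage value appears in the released data
beta_syn = np.linalg.lstsq(X, wage_syn, rcond=None)[0]
print(beta, beta_syn)
```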
999.
In this paper we determine the Gauss–Markov predictor of the nonobservable part of a random vector satisfying the linear model under a linear constraint.
1000.
We propose a parametric test for bimodality based on the likelihood principle by using two-component mixtures. The test uses explicit characterizations of the modal structure of such mixtures in terms of their parameters. Examples include the univariate and multivariate normal distributions and the von Mises distribution. We present the asymptotic distribution of the proposed test and analyze its finite sample performance in a simulation study. To illustrate our method, we use mixtures to investigate the modal structure of the cross-sectional distribution of per capita log GDP across EU regions from 1977 to 1993. Although these mixtures clearly have two components over the whole time period, the resulting distributions evolve from bimodality toward unimodality at the end of the 1970s.
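The key fact the abstract relies on, that a two-component mixture can have either one or two modes depending on its parameters, is easy to see numerically. This sketch simply counts local maxima of an equal-variance normal mixture on a fine grid (in the equal-weight, equal-variance case the mixture is bimodal exactly when the means are more than 2 standard deviations apart); it is an illustration, not the paper's test statistic:

```python
import numpy as np

def mixture_modes(p, mu1, mu2, sigma):
    """Count local maxima of the two-component normal mixture density
    p*N(mu1, sigma^2) + (1-p)*N(mu2, sigma^2) on a fine grid."""
    grid = np.linspace(min(mu1, mu2) - 4 * sigma,
                       max(mu1, mu2) + 4 * sigma, 20001)
    def phi(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    dens = p * phi(grid, mu1) + (1 - p) * phi(grid, mu2)
    # a grid point is a mode if it is strictly larger than both neighbours
    interior = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
    return int(interior.sum())

print(mixture_modes(0.5, -3.0, 3.0, 1.0))  # well-separated components: two modes
print(mixture_modes(0.5, -0.5, 0.5, 1.0))  # overlapping components: one mode
```

This is why "two components" does not imply "two modes", which is exactly the distinction driving the EU-regions finding in the abstract.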