991.
Unfortunately, many of the numerous algorithms for computing the cumulative distribution function (cdf) and noncentrality parameter of the noncentral F and beta distributions can produce completely incorrect results, as demonstrated by examples in the paper. Existing algorithms are scrutinized and the parts that involve numerical difficulties are identified. As a result, pseudocode is presented in which all the known numerical problems are resolved. This pseudocode can easily be implemented in C or FORTRAN without understanding the complicated mathematical background. Symbolic evaluation of a finite, closed-form formula is proposed to compute exact cdf values. This approach makes it possible to check quickly and reliably the values returned by professional statistical packages over an extraordinarily wide parameter range without any programming knowledge. This research was motivated by the fact that a very useful table for calculating the size of detectable effects for ANOVA tables contains suspect values in the region of large noncentrality parameter values compared to the values obtained by Patnaik's two-moment central-F approximation. The cause is identified and the corrected form of the table for ANOVA purposes is given. The accuracy of approximations to the noncentral-F distribution is also discussed. The authors wish to thank Mr. Richárd Király for his preliminary work. The authors are grateful to the Editor and Associate Editor of STCO and to the anonymous reviewers for their helpful suggestions.
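A quick way to see how Patnaik's two-moment central-F approximation behaves for a large noncentrality parameter is sketched below; this is only an illustration using SciPy's noncentral-F implementation and the usual statement of Patnaik's degrees-of-freedom adjustment, not the pseudocode or symbolic check proposed in the paper, and the parameter values are arbitrary.

```python
# Sketch: compare the noncentral-F cdf with Patnaik's two-moment central-F
# approximation for a large noncentrality parameter.  Illustrative only; the
# paper's own pseudocode and symbolic evaluation are not reproduced here.
from scipy.stats import f, ncf

def patnaik_cdf(x, df1, df2, lam):
    """Patnaik's approximation: match the noncentral chi-square chi'^2(df1, lam)
    by a scaled central chi-square, which turns the noncentral F into a
    rescaled central F with adjusted numerator degrees of freedom."""
    df1_star = (df1 + lam) ** 2 / (df1 + 2 * lam)   # adjusted numerator df
    return f.cdf(x * df1 / (df1 + lam), df1_star, df2)

df1, df2, lam, x = 4, 30, 50.0, 5.0                 # arbitrary example values
exact = ncf.cdf(x, df1, df2, lam)                   # SciPy's noncentral-F cdf
approx = patnaik_cdf(x, df1, df2, lam)
print(f"noncentral-F cdf: {exact:.6f}  Patnaik approx: {approx:.6f}")
```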
992.
A tutorial on adaptive MCMC
We review adaptive Markov chain Monte Carlo (MCMC) algorithms as a means of optimising their performance. Using simple toy examples, we review their theoretical underpinnings and, in particular, show why adaptive MCMC algorithms may fail when some fundamental properties are not satisfied. This leads to guidelines concerning the design of correct algorithms. We then review criteria and the useful framework of stochastic approximation, which allows one to systematically optimise commonly used criteria and also to analyse the properties of adaptive MCMC algorithms. We then propose a series of novel adaptive algorithms that prove to be robust and reliable in practice. These algorithms are applied to artificial and high-dimensional scenarios, as well as to the classic mine disaster dataset inference problem.
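The stochastic-approximation idea behind many adaptive MCMC schemes can be illustrated with a random-walk Metropolis sampler whose proposal scale is tuned towards a target acceptance rate using a diminishing step size; the target density, step-size schedule, and acceptance target below are illustrative choices, not the novel algorithms proposed in the paper.

```python
# Sketch of an adaptive random-walk Metropolis sampler: the log proposal scale
# is updated by a Robbins-Monro (stochastic approximation) step towards a
# target acceptance rate, with diminishing adaptation so that the adaptation
# vanishes over time.  Target density and tuning constants are illustrative.
import numpy as np

def adaptive_rwm(log_target, x0, n_iter=20000, target_accept=0.44, seed=0):
    rng = np.random.default_rng(seed)
    x, log_sigma = x0, 0.0
    samples = np.empty(n_iter)
    for t in range(1, n_iter + 1):
        prop = x + np.exp(log_sigma) * rng.normal()
        log_alpha = log_target(prop) - log_target(x)
        accept = np.log(rng.uniform()) < log_alpha
        if accept:
            x = prop
        # diminishing adaptation: gamma_t -> 0 while sum gamma_t = infinity
        gamma = t ** -0.6
        log_sigma += gamma * (float(accept) - target_accept)
        samples[t - 1] = x
    return samples, np.exp(log_sigma)

# Example: sample from a standard normal target.
samples, final_scale = adaptive_rwm(lambda x: -0.5 * x * x, x0=0.0)
print(f"adapted proposal scale: {final_scale:.3f}")
```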
993.
American options in discrete time can be priced by solving optimal stopping problems. This can be done by computing so-called continuation values, which we represent as regression functions defined recursively from the continuation values of the next time step. We use Monte Carlo to generate data and then apply smoothing spline regression estimates to estimate the continuation values from these data. All parameters of the estimate are chosen in a data-dependent way. We present results on consistency and on the rate of convergence of the estimates.
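A minimal regression-based pricing loop of this kind is sketched below for a Bermudan put under geometric Brownian motion; it substitutes a simple polynomial regression for the smoothing spline estimates (and their data-dependent parameter choice) described in the abstract, and all model parameters are made-up example values.

```python
# Sketch: price a Bermudan put by estimating continuation values recursively
# with regression on simulated paths.  A polynomial basis stands in for the
# smoothing-spline estimates of the paper; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 10000, 50
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths (columns = times dt, 2*dt, ..., T).
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

V = np.maximum(K - S[:, -1], 0.0)                  # option value at maturity
for t in range(n_steps - 2, -1, -1):
    hold = disc * V                                # value of holding, in time-t money
    itm = K - S[:, t] > 0                          # regress only on in-the-money paths
    coeffs = np.polyfit(S[itm, t] / K, hold[itm], deg=3)
    continuation = np.polyval(coeffs, S[itm, t] / K)
    exercise = K - S[itm, t]
    V = hold.copy()
    V[itm] = np.where(exercise > continuation, exercise, hold[itm])

price = disc * V.mean()                            # discount the time-dt value to time 0
print(f"estimated Bermudan put price: {price:.3f}")
```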
994.
In this work a new type of logits and odds ratios, which includes as special cases the continuation and reverse-continuation logits and odds ratios, is defined. We prove that these logits and odds ratios define a parameterization of the joint probabilities of a two-way contingency table. The problem of testing equality and inequality constraints on these logits and odds ratios is examined, with particular regard to two new hypotheses of monotone dependence. Work partially supported by a MIUR 2004 grant. Preliminary findings were presented at the SIS (Società Italiana di Statistica) Annual Meeting, Torino, 2006.
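As a small illustration of the special cases mentioned above, the sketch below computes ordinary continuation logits and the corresponding odds ratios between the two rows of a 2×J table; the generalized family introduced in the paper is not specified in the abstract, and the table counts are invented.

```python
# Sketch: continuation logits log P(Y = j | row) / P(Y > j | row) for each row
# of a two-way table, and the continuation odds ratios comparing the two rows.
# Counts are invented; the paper's generalized logits are not reproduced.
import numpy as np

counts = np.array([[30, 25, 20, 10],      # row 1: counts over J = 4 ordered categories
                   [10, 20, 30, 40]])     # row 2
p = counts / counts.sum(axis=1, keepdims=True)           # row-conditional probabilities

J = p.shape[1]
cont_logit = np.array([np.log(p[:, j] / p[:, j + 1:].sum(axis=1))
                       for j in range(J - 1)]).T          # shape: rows x (J - 1) cut points
cont_or = np.exp(cont_logit[0] - cont_logit[1])           # continuation odds ratios, row 1 vs row 2

print("continuation logits (rows x cut points):\n", np.round(cont_logit, 3))
print("continuation odds ratios:", np.round(cont_or, 3))
```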
995.
For micro-datasets considered for release as scientific or public use files, statistical agencies face the dilemma of guaranteeing the confidentiality of survey respondents on the one hand and offering sufficiently detailed data on the other. For that reason, a variety of methods to guarantee disclosure control is discussed in the literature. In this paper, we present an application of Rubin's (J. Off. Stat. 9, 462–468, 1993) idea of generating synthetic datasets from existing confidential survey data for public release. We use a set of variables from the 1997 wave of the German IAB Establishment Panel and evaluate the quality of the approach by comparing the results of an analysis by Zwick (Ger. Econ. Rev. 6(2), 155–184, 2005) on the original data with the results we obtain for the same analysis run on the dataset after the imputation procedure. The comparison shows that valid inferences can be obtained using the synthetic datasets in this context, while confidentiality is guaranteed for the survey participants.
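A stripped-down version of the fully synthetic data idea is sketched below: a sensitive variable is replaced by draws from the approximate posterior predictive distribution of a model fitted on the confidential data, and several synthetic copies are produced; the normal linear model, variable names, and number of replicates are illustrative assumptions, not the imputation models used in the paper.

```python
# Sketch: generate m fully synthetic copies of a sensitive column by drawing
# from the approximate posterior predictive of a normal linear regression
# fitted on the confidential data (flat prior).  Model choice and variables
# are illustrative; the paper's actual imputation procedure is richer.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "confidential" data: X = non-sensitive covariates, y = sensitive variable.
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(scale=1.5, size=n)

def synthetic_copies(X, y, m=5):
    """Draw m synthetic versions of y given X under a flat-prior normal linear model."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    dof = X.shape[0] - X.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    copies = []
    for _ in range(m):
        sigma2 = resid @ resid / rng.chisquare(dof)                  # draw sigma^2
        beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)   # draw beta
        copies.append(X @ beta + rng.normal(scale=np.sqrt(sigma2), size=len(y)))
    return copies

synthetic = synthetic_copies(X, y, m=5)   # release these values instead of y
print(len(synthetic), "synthetic datasets, first mean:", round(synthetic[0].mean(), 3))
```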
996.
In this paper we determine the Gauss–Markov predictor of the nonobservable part of a random vector that satisfies a linear model under a linear constraint.
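To make the setting concrete, one textbook-style formulation is sketched below, assuming an observable part y₁ = X₁β + e₁, an unobservable part y₂ = X₂β + e₂ with error covariance blocks V₁₁ and V₂₁, and a linear restriction Rβ = r; this is a standard constrained GLS/best-linear-prediction construction given for orientation only and may differ from the paper's exact result.

```latex
\begin{aligned}
\hat\beta   &= (X_1^\top V_{11}^{-1} X_1)^{-1} X_1^\top V_{11}^{-1} y_1, \\
\hat\beta_R &= \hat\beta - (X_1^\top V_{11}^{-1} X_1)^{-1} R^\top
               \bigl[R\,(X_1^\top V_{11}^{-1} X_1)^{-1} R^\top\bigr]^{-1}
               \bigl(R\hat\beta - r\bigr), \\
\hat y_2    &= X_2 \hat\beta_R + V_{21} V_{11}^{-1}\bigl(y_1 - X_1 \hat\beta_R\bigr).
\end{aligned}
```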
997.
We propose a parametric test for bimodality based on the likelihood principle, using two-component mixtures. The test uses explicit characterizations of the modal structure of such mixtures in terms of their parameters. Examples include the univariate and multivariate normal distributions and the von Mises distribution. We present the asymptotic distribution of the proposed test and analyze its finite-sample performance in a simulation study. To illustrate our method, we use mixtures to investigate the modal structure of the cross-sectional distribution of per capita log GDP across EU regions from 1977 to 1993. Although these mixtures clearly have two components over the whole time period, the resulting distributions evolve from bimodality toward unimodality at the end of the 1970s.
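The modal structure of a fitted two-component normal mixture can be inspected directly, as sketched below; this numerical mode count is only a stand-in for the explicit parametric characterization and likelihood-based test described in the abstract, and the data are simulated.

```python
# Sketch: fit a two-component Gaussian mixture and count the modes of the
# fitted density on a grid.  This numeric check stands in for the explicit
# parametric mode characterization used by the paper's test; data are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(2.5, 1.0, 600)])

gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))

grid = np.linspace(x.min(), x.max(), 2000)
dens = np.exp(gm.score_samples(grid.reshape(-1, 1)))          # fitted mixture density
is_mode = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])  # local maxima on the grid
modes = grid[1:-1][is_mode]

print("estimated modes:", np.round(modes, 2))
print("bimodal" if len(modes) == 2 else "unimodal (or other)")
```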
998.
999.
In non-experimental research, data on the same population process may be collected simultaneously by more than one instrument. For example, in the present application, two sample surveys and a population birth registration system all collect observations on first births by age and year, while the two surveys additionally collect information on women's education. To make maximum use of the three data sources, the survey data are pooled and the population data are introduced as constraints in a logistic regression equation. Reductions of around three-quarters in the standard errors of the age and birth-cohort parameters of the regression equation are obtained by introducing the population data as constraints. A halving of the standard errors of the education parameters is achieved by pooling observations from the larger survey dataset with those from the smaller survey. The percentage reduction in the standard errors from imposing population constraints is independent of the total survey sample size.
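A toy version of this pooling-with-constraints strategy is sketched below: the pooled survey log-likelihood of a logistic model is maximized subject to the requirement that model-implied first-birth rates by age group match rates known from population registration; the variable names, group definitions, constraint form, and simulated data are illustrative assumptions, not the paper's specification.

```python
# Sketch: maximum likelihood for a logistic regression on pooled survey data,
# subject to equality constraints that force the model-implied rate in each
# age group to match a rate known from population registration data.
# Data, covariates, and constraint form are illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)

# Pooled "survey" data: age group (0/1/2) and education (0/1), outcome = first birth.
n = 2000
age_group = rng.integers(0, 3, n)
education = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), age_group == 1, age_group == 2, education]).astype(float)
true_beta = np.array([-1.0, 0.8, 1.2, -0.4])
y = rng.binomial(1, expit(X @ true_beta))

# "Population" first-birth rates by age group (here taken from the simulation truth).
pop_rate = np.array([expit(X[age_group == g] @ true_beta).mean() for g in range(3)])

def neg_loglik(beta):
    eta = X @ beta
    return -(y * eta - np.log1p(np.exp(eta))).sum()

def rate_gap(beta):
    p = expit(X @ beta)
    return np.array([p[age_group == g].mean() - pop_rate[g] for g in range(3)])

res = minimize(neg_loglik, x0=np.zeros(4), method="SLSQP",
               constraints=[{"type": "eq", "fun": rate_gap}])
print("constrained estimates:", np.round(res.x, 3))
```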
1000.
Let G=(V,E) be a graph without isolated vertices. A set D ⊆ V(G) is a k-distance paired dominating set of G if D is a k-distance dominating set of G and the induced subgraph ⟨D⟩ has a perfect matching. The minimum cardinality of a k-distance paired dominating set of G is the k-distance paired domination number, denoted γ_p^k(G). In this paper, we determine the exact k-distance paired domination number of the generalized Petersen graphs P(n,1) and P(n,2) for all k ≥ 1.
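The definition can be checked by brute force on small instances, as sketched below for P(5,1) and P(5,2) with k=1 using networkx; the construction of P(n,m) and the exhaustive search are illustrative and do not reproduce the paper's closed-form results.

```python
# Sketch: brute-force computation of the k-distance paired domination number
# for small generalized Petersen graphs P(n, m).  Exhaustive search only works
# for tiny instances; the paper derives exact values in general.
from itertools import combinations
import networkx as nx

def generalized_petersen(n, m):
    """Outer cycle u_0..u_{n-1}, inner vertices v_i with edges v_i v_{i+m}, spokes u_i v_i."""
    G = nx.Graph()
    for i in range(n):
        G.add_edge(("u", i), ("u", (i + 1) % n))      # outer cycle
        G.add_edge(("v", i), ("v", (i + m) % n))      # inner edges
        G.add_edge(("u", i), ("v", i))                # spokes
    return G

def k_distance_paired_domination_number(G, k):
    nodes = list(G)
    dist = dict(nx.all_pairs_shortest_path_length(G, cutoff=k))
    for size in range(2, len(nodes) + 1, 2):          # |D| must be even (perfect matching)
        for D in combinations(nodes, size):
            if not all(any(u in dist[d] for d in D) for u in nodes):
                continue                               # D is not k-distance dominating
            M = nx.max_weight_matching(G.subgraph(D), maxcardinality=True)
            if 2 * len(M) == size:                     # <D> has a perfect matching
                return size
    return None

for m in (1, 2):
    G = generalized_petersen(5, m)
    print(f"gamma_p^1(P(5,{m})) =", k_distance_paired_domination_number(G, k=1))
```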