Article search: results 991–1000 of 10,000.
991.
One of the greatest challenges in using piecewise exponential models (PEMs) is finding an adequate grid of time points for their construction. In general, the number of intervals in such a grid and the positions of their endpoints are ad hoc choices. We extend previous work by introducing a fully Bayesian approach to the piecewise exponential model in which the grid of time points (and, consequently, the endpoints and the number of intervals) is random. We estimate the failure rates using the proposed procedure and compare the results with non-parametric piecewise exponential estimates. Estimates of the survival function using the most probable partition are compared with the Kaplan-Meier estimators (KMEs). A sensitivity analysis for the proposed model is provided, considering different prior specifications for the failure rates and for the grid. We also evaluate the effect of different percentages of censored observations on the estimates. An application to a real data set is also provided. We observe that the posteriors are strongly influenced by the prior specifications, mainly for the failure-rate parameters. The priors must therefore be elicited carefully, so that they genuinely reflect expert opinion.
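For reference, when the hazard is constant at λ_j on each interval of a grid 0 = τ_0 < τ_1 < … < τ_J, the PEM survival function takes the standard form below; the notation is ours, not necessarily the paper's.

```latex
% Piecewise exponential survival: hazard \lambda_j on (\tau_{j-1}, \tau_j];
% \Delta_j(t) is the time spent in interval j up to time t.
S(t) = \exp\!\Bigg(-\sum_{j=1}^{J} \lambda_j \, \Delta_j(t)\Bigg),
\qquad
\Delta_j(t) = \max\bigl\{0,\; \min(t, \tau_j) - \tau_{j-1}\bigr\}.
```

The novelty described in the abstract is that J and the τ_j themselves are treated as random, rather than being fixed ad hoc.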
992.
993.
Unfortunately, many of the numerous algorithms for computing the cumulative distribution function (cdf) and noncentrality parameter of the noncentral F and beta distributions can produce completely incorrect results, as demonstrated by examples in the paper. Existing algorithms are scrutinized, and the parts that involve numerical difficulties are identified. As a result, a pseudo code is presented in which all the known numerical problems are resolved. This pseudo code can easily be implemented in C or FORTRAN without understanding the complicated mathematical background. Symbolic evaluation of a finite, closed-form formula is proposed to compute exact cdf values. This approach makes it possible to check quickly and reliably the values returned by professional statistical packages over an extraordinarily wide parameter range, without any programming knowledge. This research was motivated by the fact that a widely used table for calculating the size of detectable effects in ANOVA contains suspect values in the region of large noncentrality parameter values when compared with the values obtained by Patnaik's two-moment central-F approximation. The cause is identified, and a corrected form of the table for ANOVA purposes is given. The accuracy of approximations to the noncentral-F distribution is also discussed. The authors wish to thank Mr. Richárd Király for his preliminary work. The authors are grateful to the Editor and Associate Editor of STCO and the anonymous reviewers for their helpful suggestions.
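As a concrete illustration of this kind of cross-checking (a sketch in the spirit of, but not identical to, the paper's proposal), one can compare a statistical library's noncentral-F cdf against a high-precision evaluation of the standard Poisson-mixture-of-incomplete-betas series. The scipy and mpmath calls are standard; the truncation length, precision, and test point are our illustrative choices.

```python
# High-precision reference for the noncentral-F cdf:
# P(F <= x) = sum_j Poisson(lam/2; j) * I_y(d1/2 + j, d2/2),
# with y = d1*x / (d1*x + d2) and I the regularized incomplete beta.
import mpmath as mp
from scipy.stats import ncf

def ncf_cdf_hp(x, d1, d2, lam, terms=400, dps=50):
    """Noncentral-F cdf via a truncated Poisson mixture of beta cdfs."""
    mp.mp.dps = dps
    x, d1, d2, lam = map(mp.mpf, (x, d1, d2, lam))
    y = d1 * x / (d1 * x + d2)                       # incomplete-beta argument
    total = mp.mpf(0)
    for j in range(terms):
        w = mp.exp(-lam / 2) * (lam / 2) ** j / mp.factorial(j)
        total += w * mp.betainc(d1 / 2 + j, d2 / 2, 0, y, regularized=True)
    return total

# Large noncentrality: the region where suspect table values were reported.
x, d1, d2, lam = 3.0, 5, 20, 100.0
print(float(ncf_cdf_hp(x, d1, d2, lam)))   # high-precision reference value
print(ncf.cdf(x, d1, d2, lam))             # library value being checked
```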
994.
A tutorial on adaptive MCMC
We review adaptive Markov chain Monte Carlo (MCMC) algorithms as a means of optimising their performance. Using simple toy examples, we review their theoretical underpinnings and, in particular, show why adaptive MCMC algorithms may fail when some fundamental properties are not satisfied. This leads to guidelines for the design of correct algorithms. We then review criteria and the useful framework of stochastic approximation, which allows one both to systematically optimise commonly used criteria and to analyse the properties of adaptive MCMC algorithms. We then propose a series of novel adaptive algorithms that prove robust and reliable in practice. These algorithms are applied to artificial and high-dimensional scenarios, as well as to the classic mine-disaster dataset inference problem.
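To make the flavour of such algorithms concrete, here is a minimal adaptive Metropolis sketch in the spirit of Haario, Saksman and Tamminen's classic algorithm, a standard starting point in this literature; it is not one of the paper's novel algorithms, and the regularisation constant and example target are illustrative assumptions.

```python
# Adaptive Metropolis sketch: the Gaussian proposal covariance is learned
# from the chain's own history, with the usual 2.38^2/d scaling and a small
# regularisation term eps to keep the proposal non-degenerate.
import numpy as np

def adaptive_metropolis(logpi, x0, n_iter=50_000, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x0)
    x, lp = np.asarray(x0, float), logpi(x0)
    mean, cov = x.copy(), np.eye(d)            # running moments of the chain
    chain = np.empty((n_iter, d))
    scale = 2.38**2 / d
    for t in range(n_iter):
        prop_cov = scale * (cov + eps * np.eye(d))
        y = rng.multivariate_normal(x, prop_cov)
        lpy = logpi(y)
        if np.log(rng.uniform()) < lpy - lp:   # Metropolis accept/reject
            x, lp = y, lpy
        chain[t] = x
        # Diminishing-adaptation update of the empirical mean and covariance.
        g = 1.0 / (t + 2)
        diff = x - mean
        mean += g * diff
        cov += g * (np.outer(diff, diff) - cov)
    return chain

# Example target: a strongly correlated bivariate Gaussian.
Sigma_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
logpi = lambda z: -0.5 * z @ Sigma_inv @ z
samples = adaptive_metropolis(logpi, np.zeros(2))
```

The vanishing step size g is what the stochastic-approximation viewpoint mentioned above formalises: adaptation must die out (or otherwise be controlled) for the chain to retain the correct stationary distribution.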
995.
For micro-datasets considered for release as scientific- or public-use files, statistical agencies face the dilemma of guaranteeing the confidentiality of survey respondents on the one hand and offering sufficiently detailed data on the other. For that reason, a variety of methods to guarantee disclosure control are discussed in the literature. In this paper, we present an application of Rubin's (J. Off. Stat. 9, 462–468, 1993) idea of generating synthetic datasets from existing confidential survey data for public release. We use a set of variables from the 1997 wave of the German IAB Establishment Panel and evaluate the quality of the approach by comparing the results of an analysis by Zwick (Ger. Econ. Rev. 6(2), 155–184, 2005) on the original data with the results we obtain when the same analysis is run on the dataset after the imputation procedure. The comparison shows that valid inferences can be obtained using the synthetic datasets in this context, while confidentiality is guaranteed for the survey participants.
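For orientation, inferences from m released synthetic datasets are typically pooled with combining rules of the following form (those of Raghunathan, Reiter and Rubin (2003) for fully synthetic data; we state them here for reference, and the paper may use a variant). Here q^{(i)} and u^{(i)} are the point estimate and its variance estimate computed from the i-th synthetic dataset.

```latex
% Combining rules for m fully synthetic datasets:
% pooled estimate, between- and within-imputation variances,
% and the resulting variance estimate T_s.
\bar{q}_m = \frac{1}{m}\sum_{i=1}^{m} q^{(i)}, \qquad
b_m = \frac{1}{m-1}\sum_{i=1}^{m}\bigl(q^{(i)} - \bar{q}_m\bigr)^2, \qquad
\bar{u}_m = \frac{1}{m}\sum_{i=1}^{m} u^{(i)},
\qquad
T_s = \Bigl(1 + \tfrac{1}{m}\Bigr) b_m - \bar{u}_m .
```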
996.
In this paper, we determine the Gauss–Markov predictor of the unobservable part of a random vector satisfying the linear model under a linear constraint.
997.
We propose a parametric test for bimodality based on the likelihood principle, using two-component mixtures. The test relies on explicit characterizations of the modal structure of such mixtures in terms of their parameters. Examples include the univariate and multivariate normal distributions and the von Mises distribution. We present the asymptotic distribution of the proposed test and analyze its finite-sample performance in a simulation study. To illustrate our method, we use mixtures to investigate the modal structure of the cross-sectional distribution of per capita log GDP across EU regions from 1977 to 1993. Although these mixtures clearly have two components over the whole period, the resulting distributions evolve from bimodality toward unimodality at the end of the 1970s.
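As a quick numerical companion to this idea (not the paper's asymptotic test), one can fit a two-component normal mixture by EM and count the local maxima of the fitted density; the scikit-learn and NumPy calls below are standard, and the simulated data are purely illustrative.

```python
# Fit a two-component normal mixture and count the modes of the fitted
# density on a fine grid: two components need not imply two modes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])

gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
grid = np.linspace(x.min(), x.max(), 2000)
dens = np.exp(gm.score_samples(grid.reshape(-1, 1)))   # fitted density

# Interior local maxima of the fitted density are its modes.
is_mode = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
print("number of modes:", int(is_mode.sum()))          # 2 here; 1 if means are close
```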
998.
We consider efficient joint estimation of regression and association parameters for bivariate current status data under the marginal proportional hazards model. Current status data occur in many fields, including demographic studies and tumorigenicity experiments, and several approaches have been proposed for regression analysis of univariate current status data. We discuss bivariate current status data and propose an efficient score-estimation approach for the problem. In this approach, a copula model is used for the joint survival function, with the survival times assumed to follow the proportional hazards model marginally. Simulation studies are performed to evaluate the proposed estimates and suggest that the approach works well in practical situations. A real-data application is provided for illustration.
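The model structure just described, proportional-hazards margins linked by a copula, can be written compactly as follows; C_α denotes a copula with association parameter α, and the notation is ours rather than the paper's.

```latex
% Joint survival via a copula with proportional-hazards margins;
% \Lambda_k is the baseline cumulative hazard of margin k.
S(t_1, t_2 \mid Z) = C_{\alpha}\bigl(S_1(t_1 \mid Z),\, S_2(t_2 \mid Z)\bigr),
\qquad
S_k(t \mid Z) = \exp\bigl(-\Lambda_k(t)\, e^{\beta_k^{\top} Z}\bigr),
\quad k = 1, 2.
```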
999.
Absolute risk is the chance that a person with given risk factors, free of the disease of interest at age a, will be diagnosed with that disease in the interval (a, a + τ]. Sometimes called cumulative incidence, absolute risk is a "crude" risk because it is reduced by the chance that the person will die of competing causes of death before developing the disease of interest. Cohort studies admit flexibility in modeling absolute risk, either by allowing covariates to affect the cause-specific relative hazards or by letting them affect the absolute risk itself. An advantage of cause-specific relative risk models is that various data sources can be used to fit the required components. For example, case–control data can be used to estimate relative risk and attributable risk, and these can be combined with registry data on age-specific composite hazard rates for the disease of interest and with national data on competing hazards of mortality to estimate absolute risk. Family-based designs, such as the kin-cohort design and collections of pedigrees with multiple affected individuals, can be used to estimate the genotype-specific hazard of disease. Such analyses must be adjusted for ascertainment, and failure to account for residual familial risk (such as might be induced by unmeasured genetic variants or by unmeasured behavioral or environmental exposures that are correlated within families) can lead to overestimates of mutation-specific absolute risk in the general population.
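In cause-specific hazard notation, the "crude" risk described above has the standard cumulative-incidence form below, where λ_1 is the cause-specific hazard of the disease and λ_2 the hazard of competing mortality (notation ours).

```latex
% Absolute (crude) risk of disease in (a, a + \tau], reduced by the chance
% of first dying of competing causes.
AR(a, \tau) = \int_{a}^{a+\tau} \lambda_1(t)\,
\exp\!\Big(-\!\int_{a}^{t} \bigl[\lambda_1(u) + \lambda_2(u)\bigr]\, du\Big)\, dt .
```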
1000.
This paper deals with small area indirect estimators under area-level random-effects models when only area-level data are available and the random effects are correlated. The performance of the Spatial Empirical Best Linear Unbiased Predictor (SEBLUP) is explored in a Monte Carlo simulation study on lattice data, and the estimator is applied to the results of the sample survey on Life Conditions in Tuscany (Italy). The mean squared error (MSE) estimation problem is discussed, comparing the MSE estimator with the MSE of the empirical sampling distribution of the SEBLUP estimator. A clear tendency in our empirical findings is that introducing spatially correlated random area effects reduces both the variance and the bias of the EBLUP estimator. Despite some residual bias, the coverage rate of our confidence intervals comes close to the nominal 95%.
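For orientation, the SEBLUP is usually derived from an area-level (Fay–Herriot-type) model whose area effects follow a simultaneous autoregressive (SAR) process; the formulation below is the standard one in this literature, stated in our notation, with W a known proximity matrix and ρ the spatial autoregressive coefficient.

```latex
% Area-level model with SAR-correlated random effects;
% the e_i are independent sampling errors with known variances.
\hat{\theta}_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + e_i,
\qquad
\mathbf{u} = \rho\, W \mathbf{u} + \boldsymbol{\varepsilon},
\quad
\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \sigma_u^2 I),
% so that the implied covariance of the area effects is
\mathbf{u} \sim N\!\bigl(\mathbf{0},\;
\sigma_u^2\, [(I - \rho W)^{\top}(I - \rho W)]^{-1}\bigr).
```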