671.
In this paper we first develop a Sarmanov–Lee bivariate family of distributions with beta and gamma marginal distributions. We obtain the linear correlation coefficient, showing that, although the family does not admit strong correlation, the coefficient can exceed its counterpart in the Farlie–Gumbel–Morgenstern family. We also determine other measures for this family: the coefficient of median concordance and the relative entropy, which are analyzed by comparison with the case of independence. Second, we consider the problem of premium calculation in a Poisson–Lindley and exponential collective risk model, where the Sarmanov–Lee family is used as the structure function. We determine the collective and Bayes premiums and analyze their values under independence and dependence between the risk profiles, finding that notable variations in premiums arise even at low levels of correlation.
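The Farlie–Gumbel–Morgenstern family that serves as the benchmark above is straightforward to simulate. The sketch below is illustrative only: the Beta(2, 3) and Gamma(2.5) marginals and the dependence parameter are arbitrary choices, not taken from the paper, and this is the FGM copula rather than the Sarmanov–Lee construction itself. It draws FGM pairs by conditional inversion and confirms that the linear correlation stays modest.

```python
import numpy as np
from scipy import stats

def fgm_sample(n, alpha, rng):
    """Sample (U, V) from the FGM copula by conditional inversion:
    C_{V|U}(v | u) = v + alpha*(1 - 2u)*v*(1 - v), quadratic in v."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = alpha * (1.0 - 2.0 * u)
    small = np.abs(a) < 1e-12          # near-independent slice: v = w
    a_safe = np.where(small, 1.0, a)   # placeholder to avoid division by zero
    v = ((1 + a_safe) - np.sqrt((1 + a_safe) ** 2 - 4 * a_safe * w)) / (2 * a_safe)
    return u, np.where(small, w, v)

rng = np.random.default_rng(0)
u, v = fgm_sample(50_000, 0.9, rng)
x = stats.beta.ppf(u, a=2.0, b=3.0)   # Beta(2, 3) marginal (arbitrary choice)
y = stats.gamma.ppf(v, a=2.5)         # Gamma(shape 2.5) marginal (arbitrary)
rho = np.corrcoef(x, y)[0, 1]
print(round(rho, 3))
```

The conditional CDF of the FGM copula is quadratic in v, so inversion reduces to a closed-form root; the modest empirical correlation illustrates why a family allowing stronger dependence, such as Sarmanov–Lee, can be attractive.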
672.
In statistical analysis, particularly in econometrics, it is common to consider regression models where the dependent variable is censored (limited); here a censoring scheme to the left of zero is considered. In this article, the classical normal censored model is extended by assuming independent disturbances with a common Student-t distribution. In the context of maximum likelihood estimation, an expression for the expected information matrix is provided, and an efficient EM-type algorithm for estimating the model parameters is developed. To identify which variables affect the income of housewives, the results and methods are applied to a real data set. A brief review of the normal censored regression (Tobit) model is also presented.
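For orientation, the classical normal Tobit model that the paper extends can be fitted by direct maximum likelihood. This is a minimal sketch on simulated data, with normal errors rather than the Student-t disturbances developed in the article; all parameter values are arbitrary.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n = 2000
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true = np.array([0.5, 1.0]), 1.0
y_star = x @ beta_true + sigma_true * rng.normal(size=n)   # latent response
y = np.maximum(y_star, 0.0)                                # left-censored at zero
cens = y == 0.0

def negloglik(theta):
    b, s = theta[:2], np.exp(theta[2])   # optimise log(sigma) for positivity
    mu = x @ b
    ll_obs = stats.norm.logpdf((y - mu) / s) - np.log(s)   # uncensored part
    ll_cen = stats.norm.logcdf(-mu / s)                    # P(y* <= 0) part
    return -np.where(cens, ll_cen, ll_obs).sum()

res = optimize.minimize(negloglik, np.zeros(3), method="BFGS")
b_hat, s_hat = res.x[:2], np.exp(res.x[2])
print(b_hat, s_hat)
```

The censored observations contribute the probability mass below zero, the uncensored ones the normal density; with Student-t errors both terms change and the paper's EM-type algorithm becomes the practical route.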
673.
The cross-entropy (CE) method is an adaptive importance sampling procedure that has been successfully applied to a diverse range of complicated simulation problems. However, recent research has shown that in some high-dimensional settings, the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. We consider a variation of the CE method whose performance does not deteriorate as the dimension of the problem increases. We then illustrate the algorithm via a high-dimensional estimation problem in risk management.
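A minimal version of the standard CE procedure for a rare-event probability can be sketched as follows. The example (estimating P(X1 + ... + X5 >= 30) for iid Exp(1) components) and all tuning constants are my own illustrative choices, showing the baseline method rather than the variation proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def ce_rare_prob(gamma=30.0, dim=5, n=20_000, elite_frac=0.1, max_iter=20):
    """Estimate P(X1+...+Xdim >= gamma) for iid Exp(1) components by the
    cross-entropy method; the proposal stays exponential with mean v."""
    v = 1.0
    for _ in range(max_iter):
        x = rng.exponential(v, size=(n, dim))
        s = x.sum(axis=1)
        level = min(np.quantile(s, 1 - elite_frac), gamma)   # adaptive level
        w = v ** dim * np.exp(-s * (1.0 - 1.0 / v))          # Exp(1)/Exp(v) ratio
        e = s >= level
        v = (w[e] * s[e]).sum() / (dim * w[e].sum())         # CE update of mean
        if level >= gamma:
            break
    x = rng.exponential(v, size=(n, dim))                    # final IS estimate
    s = x.sum(axis=1)
    w = v ** dim * np.exp(-s * (1.0 - 1.0 / v))
    return (w * (s >= gamma)).mean()

est = ce_rare_prob()
print(est)   # exact value is P(Gamma(5,1) >= 30), about 3.6e-9
```

The adaptive level keeps the elite fraction informative at each step; in high dimensions the likelihood ratio w degenerates, which is exactly the failure mode the paper addresses.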
674.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step; otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly smaller (but still larger than the customary range). These corrections serve to control the in-control behaviour, ensuring that markedly wrong outcomes of the estimates occur sufficiently rarely. The price for this protection is some loss of detection power when the process is out of control; a change point comes in view as soon as this loss can be made sufficiently small.
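The instability described above is easy to reproduce. Assuming a standard normal in-control distribution (an illustrative choice, not the paper's setting), the sketch below estimates a 3-sigma-type upper control limit as an empirical quantile from m = 1000 reference observations and shows how widely the realised false-alarm rate scatters across replications.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
p = 0.00135                 # one-sided false-alarm rate of a 3-sigma limit
reps, m = 2000, 1000        # m in-control observations per estimated chart
samples = rng.normal(size=(reps, m))
ucl_hat = np.quantile(samples, 1 - p, axis=1)   # nonparametric limit estimate
far = norm.sf(ucl_hat)      # realised false-alarm rate of each estimated chart
lo, hi = np.quantile(far, [0.1, 0.9])
print(lo, hi)               # the realised rate varies several-fold across charts
```

The estimated limit sits near the largest order statistics, so its stochastic error is large even with a thousand reference observations; this is the effect the corrected versions in the paper are designed to control.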
675.
Lin, Tsung I., Lee, Jack C. and Ni, Huey F. (2004). Statistics and Computing, 14(2), 119–130.
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior specifications are weakly informative, chosen to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated on a set of real data.
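Both the EM-type algorithms and the MCMC schemes for t-mixtures typically exploit the representation of the t distribution as a scale mixture of normals. A quick sketch of that augmentation (a standard identity, not code from the paper; the degrees of freedom are an arbitrary choice) verifies the heavy tails it produces.

```python
import numpy as np

rng = np.random.default_rng(4)
nu, n = 4.0, 200_000
# latent precision tau_i ~ Gamma(nu/2, rate nu/2); x_i | tau_i ~ N(0, 1/tau_i)
tau = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
x = rng.normal(size=n) / np.sqrt(tau)          # marginally Student-t with nu dof
tail_t = np.mean(np.abs(x) > 3.0)
tail_normal = np.mean(np.abs(rng.normal(size=n)) > 3.0)
print(tail_t, tail_normal)   # the t_4 tail mass beyond 3 dwarfs the normal's
```

Conditioning on the latent precisions turns the t-mixture into a normal mixture, which is what makes both the E-step and the Gibbs updates tractable.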
676.
Summary.  Functional magnetic resonance imaging (FMRI) measures the physiological response of the human brain to experimentally controlled stimulation. In a periodically designed experiment it is of interest to test for a difference in the timing (phase shift) of the response between two anatomically distinct brain regions. We suggest two tests for an interregional difference in phase shift: one based on asymptotic theory and one based on bootstrapping. Whilst the two procedures differ in some of their assumptions, both tests rely on employing the large number of voxels (three-dimensional pixels) in non-activated brain regions to take account of spatial autocorrelation between voxelwise phase shift observations within the activated regions of interest. As an example we apply both tests, and their counterparts assuming spatial independence, to FMRI phase shift data that were acquired from a normal young woman during performance of a periodically designed covert verbal fluency task. We conclude that it is necessary to take account of spatial autocovariance between voxelwise FMRI time series parameter estimates such as the phase shift, and that the most promising way of achieving this is by modelling the spatial autocorrelation structure from a suitably defined base region of the image slice.
677.
Evaluation of trace evidence in the form of multivariate data
Summary.  The evaluation of measurements on characteristics of trace evidence found at a crime scene and on a suspect is an important part of forensic science. Five methods of assessing the value of the evidence for multivariate data are described: two based on significance tests and three on the evaluation of likelihood ratios. The likelihood ratio, which compares the probability of the measurements on the evidence assuming a common source for the crime scene and suspect evidence with the probability assuming different sources, is a well-documented measure of the value of the evidence. One of the likelihood ratio approaches transforms the data to a univariate projection based on the first principal component. The other two versions of the likelihood ratio for multivariate data account for correlation among the variables and for two levels of variation: that between sources and that within sources. One version assumes that between-source variability is modelled by a multivariate normal distribution; the other models the variability with a multivariate kernel density estimate. Results are compared from the analysis of measurements on the elemental composition of glass.
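The two-level likelihood ratio has a closed form in the univariate normal case, which conveys the structure of the multivariate versions discussed here. In this sketch all parameter values are hypothetical; the same-source hypothesis makes the two sample means jointly normal with covariance equal to the between-source variance, while the different-source hypothesis makes them independent.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def two_level_lr(y1, y2, mu, tau2, sigma2, n1, n2):
    """Univariate two-level normal likelihood ratio for crime-scene mean y1 vs
    suspect mean y2: same-source joint density over the product of marginals.
    tau2 = between-source variance, sigma2 = within-source variance."""
    v1, v2 = tau2 + sigma2 / n1, tau2 + sigma2 / n2
    cov = np.array([[v1, tau2], [tau2, v2]])   # shared source induces covariance
    num = multivariate_normal(mean=[mu, mu], cov=cov).pdf([y1, y2])
    den = norm.pdf(y1, mu, np.sqrt(v1)) * norm.pdf(y2, mu, np.sqrt(v2))
    return num / den

# hypothetical values: rare matching measurements support a common source
lr_same = two_level_lr(2.0, 2.0, mu=0.0, tau2=1.0, sigma2=0.01, n1=10, n2=10)
lr_diff = two_level_lr(2.0, -2.0, mu=0.0, tau2=1.0, sigma2=0.01, n1=10, n2=10)
print(lr_same, lr_diff)
```

Matching values far from the population mean yield a large ratio, discordant values a ratio near zero; the multivariate normal and kernel versions in the paper replace these densities while keeping the same numerator/denominator structure.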
678.
The selection of an appropriate subset of explanatory variables to use in a linear regression model is an important aspect of a statistical analysis. Classical stepwise regression is often used with this aim, but it can be invalidated by a few outlying observations. In this paper, we introduce a robust F-test and a robust stepwise regression procedure based on weighted likelihood in order to achieve robustness against the presence of outliers. The proposed methodology is asymptotically equivalent to the classical one when no contamination is present. Some examples and simulations are presented.
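As a rough illustration of the downweighting idea, the sketch below uses generic Huber iteratively reweighted least squares, not the paper's weighted-likelihood methodology, to show a robust fit resisting a handful of gross outliers that drag ordinary least squares away. All data and constants are arbitrary.

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-estimate via iteratively reweighted least squares; a generic
    robust-fit sketch, not the weighted-likelihood procedure of the paper."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = max(1.4826 * np.median(np.abs(r - np.median(r))), 1e-8)  # MAD scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, n)
y[:10] += 15.0                              # ten gross outliers
ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
print(ols, rob)   # the robust fit stays near (1, 2); OLS is dragged away
```

The weights shrink toward zero for large standardized residuals, so a few contaminated points barely influence the fit; the paper's weighted-likelihood scheme plays the analogous role inside stepwise selection and the F-test.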
679.
It is well known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed-form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique; moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and the Cramér–Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
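When θ is known, the ML estimate of δ can be obtained numerically from the doubly censored likelihood. A minimal sketch on simulated data (location fixed at zero, sample size and censoring numbers arbitrary; bounded scalar minimisation stands in for the interpolation methods discussed in the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
n, r, s = 100, 10, 10                 # censor the r smallest and s largest
x = np.sort(rng.exponential(2.0, n))  # true scale delta = 2, location 0 known
obs = x[r:n - s]                      # observed middle order statistics
a, b = obs[0], obs[-1]

def negloglik(d):
    # Type II doubly censored exponential likelihood (location known = 0):
    # r values fall below a, s values exceed b, the middle is fully observed
    return -(r * np.log1p(-np.exp(-a / d)) - s * b / d
             - obs.sum() / d - len(obs) * np.log(d))

res = minimize_scalar(negloglik, bounds=(1e-3, 50.0), method="bounded")
print(res.x)   # ML estimate of the scale, close to the true value 2
```

The r lower censored observations contribute F(a)^r and the s upper ones (1 - F(b))^s; the resulting score equation has no closed-form root, matching the paper's point, but the unique maximiser is easy to locate numerically between the bounds it derives.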
680.
Summary. We consider the construction of perfect samplers for posterior distributions associated with mixtures of exponential families and conjugate priors, starting with a perfect slice sampler in the spirit of Mira and co-workers. The methods rely on a marginalization akin to Rao–Blackwellization and illustrate the duality principle of Diebolt and Robert. A first approximation embeds the finite support distribution on the latent variables within a continuous support distribution that is easier to simulate by slice sampling, but we later demonstrate that the approximation can be very poor. We conclude by showing that an alternative perfect sampler based on a single backward chain can be constructed. This alternative can handle much larger sample sizes than the slice sampler first proposed.
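For readers unfamiliar with slice moves, a plain (non-perfect) univariate slice sampler with stepping-out and shrinkage can be sketched as follows; this is the generic ingredient, targeting an arbitrary standard normal for illustration, not the perfect sampler constructed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def slice_sample(logf, x0, n, w=1.0):
    """Generic univariate slice sampler with stepping-out and shrinkage;
    logf is the log target density up to a constant, w the initial width."""
    x, out = x0, []
    for _ in range(n):
        logu = logf(x) + np.log(rng.uniform())   # vertical level under logf(x)
        left = x - w * rng.uniform()             # randomly placed initial window
        right = left + w
        while logf(left) > logu:                 # step out until past the slice
            left -= w
        while logf(right) > logu:
            right += w
        while True:                              # sample uniformly, shrinking
            x1 = rng.uniform(left, right)
            if logf(x1) > logu:
                x = x1
                break
            if x1 < x:
                left = x1
            else:
                right = x1
        out.append(x)
    return np.array(out)

draws = slice_sample(lambda t: -0.5 * t * t, 0.0, 5000)   # standard normal target
print(draws.mean(), draws.std())
```

Each iteration draws a level uniformly under the density and then a point uniformly from the horizontal slice at that level; the perfect versions in the paper couple such moves from all starting states to certify exact draws from the posterior.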