A total of 1,429 search results were retrieved; items 191–200 are shown below.
191.
ABSTRACT

We propose a generalization of the one-dimensional Jeffreys rule in order to obtain noninformative prior distributions for non-regular models, taking into account the comments made by Jeffreys in his 1946 article. These noninformative priors are invariant under reparameterization, and the resulting Bayesian intervals behave well under frequentist evaluation. In some important cases we can generate noninformative distributions for multi-parameter models with non-regular parameters. In non-regular models the Bayesian method offers a satisfactory solution to the inference problem and also avoids the difficulties that the maximum likelihood estimator faces with these models. Finally, we obtain noninformative distributions for the job-search and the homogeneous deterministic frontier production models.
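For background, the regular-case Jeffreys rule that this abstract generalizes sets the prior proportional to the square root of the Fisher information. A minimal symbolic sketch for a regular Exponential(λ) model (not one of the non-regular models treated in the article; symbol names are illustrative) might look like this:

```python
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)
logf = sp.log(lam) - lam * x                      # log-density of an Exponential(rate=lam) model
neg_hess = -sp.diff(logf, lam, 2)                 # negative second derivative of the log-density
fisher = sp.integrate(neg_hess * lam * sp.exp(-lam * x), (x, 0, sp.oo))  # expected information
jeffreys = sp.sqrt(fisher)                        # Jeffreys prior is proportional to sqrt(I(lam))
print(sp.simplify(jeffreys))                      # -> 1/lam, the familiar scale-invariant prior
```

The article's contribution concerns models where this Fisher-information route breaks down, which the sketch above does not cover.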
192.
ABSTRACT

In this article, causal inference in randomized studies with recurrent events data and all-or-none compliance is considered. We use a counting-process formulation to analyze the recurrent events data and propose a causal proportional intensity model. The maximum likelihood approach is adopted to estimate the parameters of the proposed causal model. To overcome the computational difficulties created by the mixture structure of the problem, we develop an expectation-maximization (EM) algorithm. The resulting estimators are shown to be consistent and asymptotically normal. We further estimate the complier average causal effect (CACE), defined as the difference in the average number of recurrences between the treatment and control groups within the complier class. The corresponding inferential procedures are established. Simulation studies are conducted to assess the finite-sample performance of the proposed approach.
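As a point of reference, under randomization, monotonicity, and the exclusion restriction, the CACE can also be estimated by the familiar instrumental-variable moment estimator: the intention-to-treat difference in mean event counts divided by the difference in compliance rates. The sketch below shows that simplified estimator, not the likelihood-based EM procedure of the article; function and variable names are illustrative.

```python
import numpy as np

def cace_moment_estimate(z, d, y):
    """Moment-based estimate of the complier average causal effect.

    z : 0/1 randomized treatment assignment
    d : 0/1 treatment actually received (all-or-none compliance)
    y : observed number of recurrent events per subject
    """
    z, d, y = map(np.asarray, (z, d, y))
    itt = y[z == 1].mean() - y[z == 0].mean()         # intention-to-treat effect
    compliance = d[z == 1].mean() - d[z == 0].mean()  # estimated proportion of compliers
    return itt / compliance
```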
193.
For a linear regression model over m populations with separate regression coefficients but a common error variance, a Bayesian model is employed to obtain regression coefficient estimates that are shrunk toward an overall value. The formulation uses Normal priors on the coefficients and diffuse priors on the grand mean vectors, the error variance, and the between-to-error variance ratios. The posterior density of the parameters that were given diffuse priors is obtained. From this, the posterior means and variances of the regression coefficients and the predictive mean and variance of a future observation are obtained directly by numerical integration in the balanced case, and with the aid of series expansions in the approximately balanced case. An example is presented and worked out for the case of one predictor variable. The method is an extension of Box & Tiao's Bayesian estimation of means in the balanced one-way random effects model.
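A stripped-down version of this kind of shrinkage, assuming the sampling variances and the between-population variance are known rather than integrated out as in the paper, might look like the following sketch (names are illustrative):

```python
import numpy as np

def shrink_coefficients(beta_hat, var_beta, tau2):
    """Normal-Normal shrinkage of per-population regression coefficient estimates.

    beta_hat : (m,) separate least-squares estimates, one per population
    var_beta : (m,) their sampling variances (treated as known here)
    tau2     : between-population variance (treated as known here)
    """
    beta_hat = np.asarray(beta_hat, float)
    var_beta = np.asarray(var_beta, float)
    w = 1.0 / (var_beta + tau2)
    mu_hat = np.sum(w * beta_hat) / np.sum(w)   # precision-weighted overall value
    b = var_beta / (var_beta + tau2)            # shrinkage factor toward the overall value
    return (1 - b) * beta_hat + b * mu_hat
```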
194.
In designing a study to compare two lifetime distributions, decisions are required about the study size, the proportion of observations in each group, and the length of the follow-up period. These aspects of study design are examined using a Bayesian approach in which the expected consequences of a particular choice of design are evaluated by the expected gain in information.
195.
In many experiments where data have been collected at two points in time (pre-treatment and post-treatment), investigators wish to determine whether there is a difference between two treatment groups. In recent years it has been proposed that an appropriate statistical analysis is to use the post-treatment values as the primary comparison variables and the pre-treatment values as covariates. When there are several outcome variables, we propose new tests based on residuals as alternatives to existing methods and investigate how the powers of the new and existing tests are affected by various choices of covariates. The limiting distribution of the test statistic of the new residual-based test is given. Monte Carlo simulations are employed in the power comparisons.
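A univariate analogue of a residual-based comparison (the article treats several outcome variables) could be sketched as follows: regress the post-treatment values on the pre-treatment values over the pooled sample and compare the residuals between groups. This is only an illustration of the idea, not the article's proposed statistic.

```python
import numpy as np
from scipy import stats

def residual_group_test(pre, post, group):
    """Compare post-treatment values between two groups after removing the pre-treatment trend."""
    pre, post, group = map(np.asarray, (pre, post, group))
    X = np.column_stack([np.ones_like(pre, dtype=float), pre])   # intercept + pre-treatment covariate
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)              # pooled least-squares fit
    resid = post - X @ coef                                      # residuals used as comparison variables
    return stats.ttest_ind(resid[group == 1], resid[group == 0])
```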
196.
We provide an application of a variety of predicting densities to quality control involving multivariate normal linear models. We produce optimal control designs for single multivariate future observations using predicting densities based on estimative, profile likelihood, Hinkley-Lauritzen, Butler, Bayesian, and parametric bootstrap methodologies. The decision-theoretic optimality criterion is an intuitively appealing quadratic consumer-producer risk function. The optimal control design arising from an optimal Kullback-Leibler frequentist prediction density is shown to coincide with that arising from an optimal Kullback-Leibler Bayesian predictive density. An example involving EVOP is provided to illustrate the methodology and to raise questions concerning the relative merits of the variety of predictive approaches in the quality control context.
197.
In an earlier paper the authors (1997) extended the results of Hayter (1990) to the two-parameter exponential probability model. This paper addresses the extension to the scale-parameter case under a location-scale probability model. Consider k (k ≥ 3) treatments or competing firms such that an observation from the ith treatment or firm follows a distribution with cumulative distribution function (cdf) F_i(x) = F[(x − μ_i)/θ_i], where F(·) is any absolutely continuous cdf, i = 1, …, k. We propose a test of the null hypothesis H_0: θ_1 = … = θ_k against the simple ordered alternative H_1: θ_1 ≤ … ≤ θ_k, with at least one strict inequality, using the data X_ij, i = 1, …, k; j = 1, …, n_i. Two methods to compute the critical points of the proposed test are demonstrated by taking k two-parameter exponential distributions. The test procedure also allows us to construct simultaneous one-sided confidence intervals (SOCIs) for the ordered pairwise ratios θ_j/θ_i, 1 ≤ i < j ≤ k. Statistical simulation revealed that (i) the actual sizes at the computed critical points are nearly conservative, and (ii) the power of the proposed test is higher than that of some existing tests.
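One generic way to obtain critical points by simulation, in the spirit described here, is to generate the test statistic under H_0 (equal scale parameters) and take an upper quantile. The statistic below, the maximum of pairwise ratios of scale estimates from exponential samples, is only an illustrative stand-in for the statistic studied in the paper; all names and choices are assumptions.

```python
import numpy as np

def critical_point(k, n, alpha=0.05, reps=20000, seed=None):
    """Monte Carlo upper-alpha critical point of a max-ratio statistic under equal exponential scales."""
    rng = np.random.default_rng(seed)
    sims = np.empty(reps)
    for r in range(reps):
        theta_hat = rng.exponential(1.0, size=(k, n)).mean(axis=1)   # scale estimates under H0
        ratios = theta_hat[None, :] / theta_hat[:, None]             # ratios theta_hat_j / theta_hat_i
        sims[r] = ratios[np.triu_indices(k, 1)].max()                # maximum over ordered pairs i < j
    return np.quantile(sims, 1 - alpha)
```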
198.
The Bonferroni t-statistic is a versatile tool in multiple comparisons problems. The need for "oddball" percentage points may lead to extensive tables or heavy computation. Charts of t_p as a function of log p enable near two-decimal accuracy for any percentage point between .01 and .00001.
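With modern software such "oddball" percentage points can be computed directly from the t quantile function rather than read from charts; a small sketch (illustrative function name) follows.

```python
from scipy import stats

def bonferroni_t_critical(alpha, m, df):
    """Upper critical value of Student's t for m two-sided Bonferroni-adjusted comparisons.

    Each comparison is tested at level alpha/m, so the per-comparison upper-tail
    probability is alpha/(2*m), which may be a very small 'oddball' value.
    """
    return stats.t.ppf(1 - alpha / (2 * m), df)

# Example: 10 comparisons, overall alpha = 0.05, 20 degrees of freedom
# bonferroni_t_critical(0.05, 10, 20) is roughly 3.15
```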
199.
ABSTRACT

In queuing theory, a major interest of researchers is studying the formation process and behavior of queues and analyzing their performance characteristics, particularly the traffic intensity, defined as the ratio between the arrival rate and the service rate. How these parameters can be estimated by statistical inference is the mathematical problem treated here. This article aims to obtain better Bayesian estimates for the traffic intensity of M/M/1 queues, which, in Kendall notation, are single-server Markovian queues with infinite capacity. The Jeffreys prior is proposed to obtain the posterior and predictive distributions of some parameters of interest. Samples are obtained through simulation and some performance characteristics are analyzed. It is observed from the Bayes factor that the Jeffreys prior is competitive among informative and non-informative prior distributions and presents the best performance in many of the cases tested.
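A minimal simulation-flavoured sketch of Bayesian estimation of ρ = λ/μ, using conjugate Gamma priors on the arrival and service rates as a vague stand-in for the Jeffreys prior analysed in the article (all names and hyperparameters are illustrative assumptions):

```python
import numpy as np

def traffic_intensity_posterior(interarrivals, services, draws=10000, seed=None):
    """Posterior draws of the M/M/1 traffic intensity rho = lambda / mu.

    Assumes exponential inter-arrival and service times with vague Gamma(a, b)
    priors on the arrival rate lambda and the service rate mu.
    """
    rng = np.random.default_rng(seed)
    a, b = 0.001, 0.001                                        # vague Gamma hyperparameters
    t, s = np.asarray(interarrivals), np.asarray(services)
    lam = rng.gamma(a + t.size, 1.0 / (b + t.sum()), draws)    # Gamma posterior of the arrival rate
    mu = rng.gamma(a + s.size, 1.0 / (b + s.sum()), draws)     # Gamma posterior of the service rate
    return lam / mu                                            # posterior draws of rho
```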
200.
ABSTRACT

We propose an extension of parametric product partition models (PPMs). We name our proposal nonparametric product partition models because we associate a random measure, instead of a parametric kernel, with each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts in behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with parametric PPMs and with other methodologies that have recently appeared in the literature. Also, in the context of missing data, we exploit the product partition structure in order to estimate the distribution function of each missing value, allowing us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data.
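For orientation only, the simplest change-point idea that partition-based methods build on can be illustrated by a least-squares search for a single change point; the generic sketch below is not the nonparametric product partition machinery of the article, and its names are illustrative.

```python
import numpy as np

def single_change_point(x):
    """Least-squares location of a single change point in a univariate sequence.

    For each candidate split, the cost is the sum of squared deviations of the two
    segments from their own means; the split minimising the cost is returned.
    """
    x = np.asarray(x, float)
    costs = []
    for k in range(1, x.size):
        left, right = x[:k], x[k:]
        costs.append(((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum())
    return int(np.argmin(costs)) + 1   # index of the first observation after the change
```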