Article search
By access type (number of articles):
Paid full text: 1825
Free: 56
Free (domestic): 6
By subject (number of articles):
Management: 104
Demography: 20
Collected works and series: 27
Theory and methodology: 45
General: 367
Sociology: 43
Statistics: 1281
By year of publication (number of articles):
2024: 1
2023: 17
2022: 18
2021: 23
2020: 42
2019: 71
2018: 68
2017: 119
2016: 63
2015: 57
2014: 60
2013: 410
2012: 179
2011: 64
2010: 56
2009: 59
2008: 43
2007: 55
2006: 46
2005: 61
2004: 39
2003: 40
2002: 41
2001: 36
2000: 31
1999: 30
1998: 34
1997: 24
1996: 18
1995: 17
1994: 13
1993: 10
1992: 11
1991: 6
1990: 9
1989: 5
1988: 1
1985: 1
1984: 1
1983: 1
1981: 2
1980: 3
1978: 1
1975: 1
A total of 1887 matching records were found (search time: 15 ms).
1.
Abstract.  Characterizing relations based on the Rényi entropy of m-generalized order statistics are considered, along with examples and related stochastic orderings. Previous results for ordinary order statistics are included.
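For reference, the Rényi entropy underlying such characterizations is, for a density f and order α, given by the standard definition below (the paper's specific parameterization of m-generalized order statistics is not reproduced here):

```latex
H_{\alpha}(f) \;=\; \frac{1}{1-\alpha}\,\log \int_{-\infty}^{\infty} f^{\alpha}(x)\,dx,
\qquad \alpha > 0,\ \alpha \neq 1,
```

which recovers the Shannon entropy −∫ f(x) log f(x) dx in the limit α → 1.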
2.
A conformance proportion is an important and useful index for assessing industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest follow a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distributional assumption on the variables, we apply several confidence-interval construction methods for the conformance proportion by treating it as the success probability of a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial-distribution-based methods, namely the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain simulated cps sufficiently close to the claimed level, and hence their performance is judged satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
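As a concrete illustration of the binomial-based approach, the following sketch (with hypothetical inspection data; the paper's FGPQ method is not reproduced) computes the standard textbook Wald interval and the Wilson score interval for a conformance proportion treated as a binomial success probability.

```python
import math
from statistics import NormalDist

def wald_ci(x, n, alpha=0.05):
    """Standard textbook (Wald) interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, alpha=0.05):
    """Wilson score interval, one common alternative with better coverage."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    centre = (x + z**2 / 2) / (n + z**2)
    half = (z / (n + z**2)) * math.sqrt(x * (n - x) / n + z**2 / 4)
    return centre - half, centre + half

# Hypothetical example: 188 of 200 inspected items conform to specification.
print("Wald:  ", wald_ci(188, 200))    # roughly (0.907, 0.973)
print("Wilson:", wilson_ci(188, 200))
```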
3.
Modeling spatial overdispersion requires point process models whose finite-dimensional distributions are overdispersed relative to the Poisson distribution. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. Although processes based on negative binomial finite-dimensional distributions have been widely considered, they typically fail to satisfy all three required properties simultaneously; indeed, Diggle and Milne conjectured that no negative binomial model can satisfy all three. In light of this, we change perspective and construct a new process based on a different overdispersed count model, namely the generalized Waring (GW) distribution. While comparable to negative binomial processes in tractability and flexibility, the GW process is shown to possess all of the required properties and, in addition, to contain the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
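The overdispersion involved can be illustrated with a small simulation. The sketch below draws counts from one common construction of the generalized Waring distribution, as a beta mixture of negative binomials; the parameterization GW(a, k; ρ) with p ~ Beta(ρ, a) and X | p ~ NB(k, p) is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_generalized_waring(a, k, rho, size, rng):
    """Beta mixture of negative binomials: p ~ Beta(rho, a), X | p ~ NB(k, p).

    Under this assumed parameterization, X marginally follows a generalized
    Waring law; its variance-to-mean ratio exceeds 1 (overdispersion).
    """
    p = rng.beta(rho, a, size=size)
    return rng.negative_binomial(k, p)

x = sample_generalized_waring(a=2.0, k=3.0, rho=4.0, size=100_000, rng=rng)
poisson = rng.poisson(x.mean(), size=100_000)

# The GW counts show a variance-to-mean ratio well above 1,
# while a Poisson sample with the same mean stays close to 1.
print("GW      var/mean:", x.var() / x.mean())
print("Poisson var/mean:", poisson.var() / poisson.mean())
```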
4.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing-information principle and is used to construct asymptotic confidence intervals. Interval estimation is also discussed through bootstrap intervals. The Tierney-Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the corresponding prediction intervals are obtained. Three optimality criteria are considered for finding the optimal stress level. A real data set is used to illustrate the value of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is presented and discussed.
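For orientation, the sketch below simulates complete (uncensored) data from the GHN distribution under one common parameterization, F(x) = 2Φ((x/θ)^α) − 1 for x > 0, and recovers (α, θ) by direct numerical maximum likelihood. Both the parameterization and the use of a generic optimizer instead of the article's EM algorithm are assumptions of this sketch.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

def rgh_normal(alpha, theta, size, rng):
    """Inverse-CDF sampling from the assumed GHN form F(x) = 2*Phi((x/theta)**alpha) - 1."""
    u = rng.uniform(size=size)
    return theta * stats.norm.ppf((u + 1.0) / 2.0) ** (1.0 / alpha)

def neg_log_lik(params, x):
    """Negative log-likelihood under the assumed GHN density
    f(x) = sqrt(2/pi) * (alpha/x) * (x/theta)**alpha * exp(-0.5*(x/theta)**(2*alpha))."""
    alpha, theta = params
    if alpha <= 0 or theta <= 0:
        return np.inf
    z = (x / theta) ** alpha
    log_f = np.log(np.sqrt(2.0 / np.pi) * alpha / x) + np.log(z) - 0.5 * z**2
    return -np.sum(log_f)

x = rgh_normal(alpha=1.5, theta=2.0, size=500, rng=rng)
fit = optimize.minimize(neg_log_lik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print(fit.x)  # should land roughly near (1.5, 2.0)
```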
5.
The product of two independent or dependent scalar normal variables, sums of such products, sample covariances, and general bilinear forms are considered. Their distributions are shown to belong to a class called the generalized Laplacian. A growth-decay mechanism is also shown to produce such a generalized Laplacian. Sets of necessary and sufficient conditions are derived for bilinear forms to belong to this class. As a generalization, the distributions of rectangular matrices associated with multivariate normal random vectors are also discussed.
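A quick numerical check of the simplest case: for two independent standard normal variables, the product has the well-known density f(z) = K₀(|z|)/π, where K₀ is the modified Bessel function of the second kind. The sketch below compares this density with a Monte Carlo histogram; it covers only this special case, not the paper's general bilinear forms.

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(1)
z = rng.standard_normal(1_000_000) * rng.standard_normal(1_000_000)

# Compare an empirical histogram with the theoretical density K0(|z|)/pi
# at a few interior points (the density has a logarithmic spike at 0).
hist, edges = np.histogram(z, bins=400, range=(-5, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for g in (0.25, 0.5, 1.0, 2.0):
    empirical = hist[np.argmin(np.abs(centers - g))]
    print(f"z={g:4.2f}  empirical~{empirical:.3f}  theory={k0(g) / np.pi:.3f}")
```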
6.
Complete and partial diallel cross designs are examined with respect to their construction and their robustness against the loss of a block of observations. A simple generalized inverse is found for the information matrix of the line effects, which allows evaluation of expressions for the variances of line-effect differences with and without the missing block. A-efficiencies, based on the average variances of the elementary contrasts of the line effects, suggest that these designs are fairly robust: the loss of efficiency is generally less than 10%, although specific comparisons may suffer a loss of efficiency of as much as 40%.
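The efficiency comparison can be reproduced generically: given the information matrix of the line effects for the complete design and for the design with a block removed, the average variance of the elementary contrasts is obtained from a generalized inverse. The matrices in the sketch below are placeholders, not an actual diallel cross information matrix.

```python
import numpy as np

def avg_contrast_variance(C):
    """Average variance (in units of sigma^2) of the elementary contrasts
    tau_i - tau_j, computed from a generalized inverse of the information matrix C."""
    Cg = np.linalg.pinv(C)            # Moore-Penrose generalized inverse
    p = C.shape[0]
    total, count = 0.0, 0
    for i in range(p):
        for j in range(i + 1, p):
            d = np.zeros(p)
            d[i], d[j] = 1.0, -1.0
            total += d @ Cg @ d       # var(tau_hat_i - tau_hat_j) / sigma^2
            count += 1
    return total / count

# Placeholder information matrices for four line effects (not a real diallel design):
# complete design vs. the same design with one block of observations removed.
C_full = 6.0 * (np.eye(4) - np.ones((4, 4)) / 4)
C_miss = C_full - 0.5 * (np.eye(4) - np.ones((4, 4)) / 4)

a_efficiency = avg_contrast_variance(C_full) / avg_contrast_variance(C_miss)
print("A-efficiency with the missing block:", a_efficiency)   # about 0.92 here
```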
7.
Many applications of nonparametric tests based on curve estimation involve selecting a smoothing parameter. The author proposes an adaptive test that combines several generalized likelihood ratio tests in order to achieve power nearly equal to that of whichever component test is best. She derives the asymptotic joint distribution of the component tests and the distribution of the proposed test under the null hypothesis. She also develops a simple method for selecting the smoothing parameters of the proposed test and presents two approximate methods for obtaining its P-value. Finally, she evaluates the proposed test through simulations and illustrates its application to a set of real data.
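A loosely related toy illustration of the combining idea (this is not the author's adaptive generalized likelihood ratio procedure): compute a component statistic for each candidate smoothing parameter, take the maximum of the standardized statistics as the adaptive statistic, and calibrate its P-value by Monte Carlo under the null. The chi-square-like component statistics and their independence below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def adaptive_statistic(stats_by_bandwidth, null_mean, null_sd):
    """Maximum of standardized component statistics across smoothing parameters."""
    return np.max((stats_by_bandwidth - null_mean) / null_sd)

# Toy setup: pretend each bandwidth yields a chi-square-like component statistic.
dfs = np.array([4, 8, 16])                    # hypothetical effective degrees of freedom
null_mean, null_sd = dfs, np.sqrt(2 * dfs)

# In practice the component statistics are dependent and the null distribution
# would be simulated jointly (e.g. by resampling); independent draws are a toy.
null_draws = rng.chisquare(dfs, size=(20_000, 3))
t_null = np.max((null_draws - null_mean) / null_sd, axis=1)

observed = np.array([9.0, 14.0, 21.0])        # hypothetical observed component statistics
t_obs = adaptive_statistic(observed, null_mean, null_sd)
print("Monte Carlo p-value:", np.mean(t_null >= t_obs))
```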
8.
Building on a differentiated-services mechanism, this paper studies multi-tier bandwidth pricing for the Internet. By introducing a generalized second-price auction, a multi-tier bandwidth pricing model based on differentiated services is constructed. The model can effectively mitigate Internet congestion and promote the rational use of network resources.
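To make the auction mechanism concrete, the minimal sketch below (hypothetical bids and tier names, not the paper's pricing model) implements a plain generalized second-price auction: bidders are ranked by bid and the winner of each bandwidth tier pays the next-highest bid.

```python
from typing import List, Tuple

def gsp_allocate(bids: List[Tuple[str, float]], n_tiers: int) -> List[Tuple[str, str, float]]:
    """Generalized second-price auction: rank bidders by bid (descending);
    the i-th ranked bidder wins tier i and pays the (i+1)-th highest bid."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    results = []
    for i in range(min(n_tiers, len(ranked))):
        bidder, _ = ranked[i]
        # Payment is the next-highest bid, or 0 if there is no lower bidder.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((bidder, f"tier-{i + 1}", price))
    return results

# Hypothetical customers bidding for three bandwidth tiers.
bids = [("A", 12.0), ("B", 9.5), ("C", 7.0), ("D", 4.0)]
print(gsp_allocate(bids, n_tiers=3))
# [('A', 'tier-1', 9.5), ('B', 'tier-2', 7.0), ('C', 'tier-3', 4.0)]
```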
9.
Summary.  Factor analysis is a powerful tool for identifying the common characteristics among a set of variables measured on a continuous scale. In the context of factor analysis for non-continuous data, most applications are restricted to item response data. We extend the factor model to accommodate ranked data. The Monte Carlo expectation-maximization algorithm is used for parameter estimation, with the E-step implemented via the Gibbs sampler. Analyses based on both complete and incomplete ranked data (e.g. ranking the top q out of k items) are considered. Estimation of the factor scores is also discussed. The proposed method is applied to a set of incomplete ranked data obtained from a survey carried out in Guangzhou, a major city in mainland China, to investigate the factors affecting people's attitudes towards choosing jobs.
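The rank-constrained part of such an E-step can be illustrated with a small Gibbs update: for one subject, each latent utility is redrawn from a normal distribution truncated by the utilities of its rank neighbours, so the latent ordering always matches the observed ranking. The sketch below shows only this generic update, with unit error variance and a made-up mean vector standing in for Λf; it is not the authors' full Monte Carlo EM algorithm.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

def gibbs_update_latent(y, mu, ranking, rng):
    """One Gibbs sweep over the latent utilities of a single subject.

    `ranking` lists item indices from best to worst.  Each latent y[j] is
    redrawn from N(mu[j], 1) truncated to the interval set by its neighbours
    in the ranking, so the ordering of y always agrees with the observed ranks.
    """
    k = len(ranking)
    for r, j in enumerate(ranking):
        upper = y[ranking[r - 1]] if r > 0 else np.inf          # next-better item
        lower = y[ranking[r + 1]] if r + 1 < k else -np.inf     # next-worse item
        a, b = lower - mu[j], upper - mu[j]                     # standardized bounds (scale 1)
        y[j] = truncnorm.rvs(a, b, loc=mu[j], scale=1.0, random_state=rng)
    return y

# Toy example: 4 items; the subject ranked item 2 best, then items 0, 3, 1.
ranking = [2, 0, 3, 1]
mu = np.array([0.3, -0.5, 0.8, 0.0])            # hypothetical Lambda @ f for this subject
start = np.sort(rng.standard_normal(4))[::-1]   # any strictly decreasing values
y = np.empty(4)
y[ranking] = start                              # initial latents consistent with the ranking
for _ in range(10):
    y = gibbs_update_latent(y, mu, ranking, rng)
print(y, "best-to-worst order:", list(np.argsort(-y)))  # matches the observed ranking
```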
10.
Summary.  In studies assessing the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to be ascertained on all study subjects. Although it may be more ethical or cost effective to ascertain the true disease status at a higher rate in subjects for whom the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no such methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic (ROC) curves and the area under the ROC curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias-corrected estimators proposed are applied to data from a study of screening tests for neonatal hearing loss.
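The reweighting idea is easy to sketch for a single threshold: verified subjects are weighted by the inverse of their verification probability when estimating the true and false positive rates, so that subjects verified with low probability count for more. The simulation below is a toy in which verification depends on the screening result and the verification probabilities are assumed known, so the naive verified-only estimate is biased while the inverse-probability-weighted one is approximately unbiased; it is not the authors' estimators of the full ROC curve.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000

# Simulated continuous screening test: diseased subjects score higher on average.
disease = rng.binomial(1, 0.2, size=n)
score = rng.normal(loc=disease * 1.2, scale=1.0)
calls = (score > 0.6).astype(int)            # classification at one chosen threshold

# Verification (true disease ascertainment) is more likely for screen-positives.
pi = np.where(calls == 1, 0.9, 0.2)
verified = rng.binomial(1, pi)

def naive_tpr_fpr(calls, disease, verified):
    """Verified-only estimates, which are biased under this verification scheme."""
    d, c = disease[verified == 1], calls[verified == 1]
    return c[d == 1].mean(), c[d == 0].mean()

def ipw_tpr_fpr(calls, disease, verified, pi):
    """Reweighted estimates: unverified subjects get weight 0, verified get 1/pi."""
    w = verified / pi
    tpr = np.sum(w * calls * disease) / np.sum(w * disease)
    fpr = np.sum(w * calls * (1 - disease)) / np.sum(w * (1 - disease))
    return tpr, fpr

print("naive (verified only):", naive_tpr_fpr(calls, disease, verified))
print("reweighted (IPW):     ", ipw_tpr_fpr(calls, disease, verified, pi))
```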