91.
Several procedures have been proposed for testing the hypothesis that all off-diagonal elements of the correlation matrix of a multivariate normal distribution are equal. If the hypothesis of equal correlation can be accepted, it is then of interest to estimate and perhaps test hypotheses for the common correlation. In this paper, two versions of five different test statistics are compared via simulation in terms of adequacy of the normal approximation, coverage probabilities of confidence intervals, control of Type I error, and power. The results indicate that two test statistics based on the average of the Fisher z-transforms of the sample correlations should be used in most cases. A statistic based on the sample eigenvalues also gives reasonable results for confidence intervals and lower-tailed tests.
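The Fisher z-averaging idea behind the recommended statistics can be sketched numerically; the function name, simulation settings, and the choice of a common correlation of 0.5 below are our own illustrative assumptions, not the paper's:

```python
import numpy as np

def common_correlation_estimate(x):
    """Pool all off-diagonal sample correlations by averaging their
    Fisher z-transforms, then back-transform (illustrative sketch)."""
    r = np.corrcoef(x, rowvar=False)
    iu = np.triu_indices_from(r, k=1)     # off-diagonal (upper-triangle) entries
    z_bar = np.mean(np.arctanh(r[iu]))    # average Fisher z-transform
    return np.tanh(z_bar)                 # back to the correlation scale

# Simulate an equicorrelated multivariate normal with common rho = 0.5
rng = np.random.default_rng(0)
p, rho = 4, 0.5
sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
x = rng.multivariate_normal(np.zeros(p), sigma, size=2000)
rho_hat = common_correlation_estimate(x)
```

Averaging on the z-scale before back-transforming stabilizes the variance of the pooled estimate, which is why the z-based statistics approximate normality well.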
92.
Power in sanda (Chinese kickboxing) is, alongside technique and other elements, another crucial component of the sport. It manifests in two forms during competition: local (segmental) power and whole-body power. The two alternate throughout a bout, and it is the continual interchange between them, together with the joint action of the various forms of power in different situations, that enables an athlete to win. The scientific application and training of local and whole-body power not only shape the practitioner's athletic physique but also cultivate a distinctive sporting bearing, making sanda a well-rounded sport.
93.
Likelihood ratios (LRs) are used to characterize the efficiency of diagnostic tests. In this paper, we use the classical weighted least squares (CWLS) test procedure, originally proposed for testing the homogeneity of relative risks, to compare the LRs of two or more binary diagnostic tests. We compare the performance of this method with the relative diagnostic likelihood ratio (rDLR) method and the diagnostic likelihood ratio regression (DLRReg) approach in terms of size and power, and we observe that CWLS and rDLR perform identically when comparing two diagnostic tests, while the DLRReg method has higher Type I error rates and power. We also examine the performance of the CWLS and DLRReg methods for comparing three diagnostic tests across various sample size and prevalence combinations. On the basis of Monte Carlo simulations, we conclude that all of the tests are generally conservative and have low power, especially in settings of small sample size and low prevalence.
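For readers unfamiliar with diagnostic LRs, a minimal sketch of the quantities being compared; the 2×2 counts are hypothetical, and the log-ratio of two positive LRs shown here is only the basic building block of the rDLR-style comparisons the abstract evaluates:

```python
import numpy as np

def positive_lr(tp, fn, fp, tn):
    """Positive likelihood ratio: LR+ = sensitivity / (1 - specificity)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens / (1 - spec)

# Hypothetical 2x2 counts for two binary diagnostic tests on the same disease
lr_a = positive_lr(tp=90, fn=10, fp=20, tn=80)   # test A: sens 0.9, spec 0.8
lr_b = positive_lr(tp=80, fn=20, fp=10, tn=90)   # test B: sens 0.8, spec 0.9
log_relative_dlr = np.log(lr_a / lr_b)           # log relative DLR
```

A value of `log_relative_dlr` near zero indicates similar LRs; the methods in the paper attach standard errors and test statistics to contrasts of this kind.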
94.
Louis Anthony Cox, Jr. Risk Analysis. 2012;32(11):1919–1934
Extreme and catastrophic events pose challenges for normative models of risk management decision making. They invite development of new methods and principles to complement existing normative decision and risk analysis. Because such events are rare, it is difficult to learn about them from experience. They can prompt both too little concern before the fact, and too much after. Emotionally charged and vivid outcomes promote probability neglect and distort risk perceptions. Aversion to acting on uncertain probabilities saps precautionary action; moral hazard distorts incentives to take care; imperfect learning and social adaptation (e.g., herd-following, group-think) complicate forecasting and coordination of individual behaviors and undermine prediction, preparation, and insurance of catastrophic events. Such difficulties raise substantial challenges for normative decision theories prescribing how catastrophe risks should be managed. This article summarizes challenges for catastrophic hazards with uncertain or unpredictable frequencies and severities, hard-to-envision and incompletely described decision alternatives and consequences, and individual responses that influence each other. Conceptual models and examples clarify where and why new methods are needed to complement traditional normative decision theories for individuals and groups. For example, prospective and retrospective preferences for risk management alternatives may conflict; procedures for combining individual beliefs or preferences can produce collective decisions that no one favors; and individual choices or behaviors in preparing for possible disasters may have no equilibrium. Recent ideas for building “disaster-resilient” communities can complement traditional normative decision theories, helping to meet the practical need for better ways to manage risks of extreme and catastrophic events.
95.
The paper studies five entropy tests of exponentiality using five statistics based on different entropy estimates. Critical values for various sample sizes determined by means of Monte Carlo simulations are presented for each of the test statistics. By simulation, we compare the power of these five tests for various alternatives and sample sizes.
96.
For the two-sample location and scale problem we propose an adaptive test based on so-called Lepage type tests. The well-known test of Lepage (1971) combines the Wilcoxon test for location alternatives with the Ansari-Bradley test for scale alternatives and behaves well for symmetric and medium-tailed distributions. For short-, medium- and long-tailed distributions we replace the Wilcoxon test and the Ansari-Bradley test by other suitable two-sample tests for location and scale, respectively, in order to obtain higher power than the classical Lepage test for such distributions. These tests are here called Lepage type tests. In practice, however, we generally have no clear idea about the distribution that generated our data. Thus, an adaptive test should be applied which takes the given data set into consideration. The proposed adaptive test is based on the concept of Hogg (1974): first, classify the unknown symmetric distribution function with respect to a measure of tailweight; second, apply an appropriate Lepage type test for the classified type of distribution. We compare the adaptive test with the three Lepage type tests in the adaptive scheme, with the classical Lepage test, and with other parametric and nonparametric tests. The power comparison is carried out via Monte Carlo simulation. It is shown that the adaptive test is the best one for the broad class of distributions considered.
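The classical Lepage statistic the adaptive scheme builds on can be sketched directly from standard rank moments (Wilcoxon and Ansari-Bradley null means and variances); the simulation settings below are our own:

```python
import numpy as np
from scipy.stats import rankdata

def lepage(x, y):
    """Lepage statistic: squared standardized Wilcoxon (location) plus
    squared standardized Ansari-Bradley (scale) statistic.
    Approximately chi-square with 2 df under H0 (sketch)."""
    m, n = len(x), len(y)
    N = m + n
    ranks = rankdata(np.concatenate([x, y]))
    w = ranks[:m].sum()                              # Wilcoxon rank sum of x
    ew, vw = m * (N + 1) / 2, m * n * (N + 1) / 12
    ab = np.minimum(ranks, N + 1 - ranks)            # Ansari-Bradley scores
    a = ab[:m].sum()
    if N % 2 == 0:
        ea = m * (N + 2) / 4
        va = m * n * (N + 2) * (N - 2) / (48 * (N - 1))
    else:
        ea = m * (N + 1) ** 2 / (4 * N)
        va = m * n * (N + 1) * (3 + N ** 2) / (48 * N ** 2)
    return (w - ew) ** 2 / vw + (a - ea) ** 2 / va

rng = np.random.default_rng(0)
stat_shift = lepage(rng.normal(0, 1, 50), rng.normal(2, 1, 50))  # location shift
stat_null = lepage(rng.normal(0, 1, 50), rng.normal(0, 1, 50))   # same distribution
```

A Lepage type test in the paper's sense swaps in different location and scale components while keeping this quadratic-combination structure.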
97.
Recently, Ristić and Nadarajah [A new lifetime distribution. J Stat Comput Simul. 2014;84:135–150] introduced the Poisson generated family of distributions and investigated the properties of a special case named the exponentiated-exponential Poisson distribution. In this paper, we study general mathematical properties of the Poisson-X family in the context of the T-X family of distributions pioneered by Alzaatreh et al. [A new method for generating families of continuous distributions. Metron. 2013;71:63–79], which include quantile, shapes of the density and hazard rate functions, asymptotics and Shannon entropy. We obtain a useful linear representation of the family density and explicit expressions for the ordinary and incomplete moments, mean deviations and generating function. One special lifetime model called the Poisson power-Cauchy is defined and some of its properties are investigated. This model can have flexible hazard rate shapes such as increasing, decreasing, bathtub and upside-down bathtub. The method of maximum likelihood is used to estimate the model parameters. We illustrate the flexibility of the new distribution by means of three applications to real life data sets.
98.
A statistical test can be seen as a procedure to produce a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability to make a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not a null hypothesis), Kaiser’s directional two-sided test as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involve three possible decisions to infer the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein’s theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the testing procedures of Kaiser (no assumption on a point hypothesis being impossible) and Jones and Tukey (higher power), allowing for a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power to get a significant result, compared to the traditional approach.
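One plausible reading of the five-decision partition, sketched for a z-statistic; the zone labels and the choice of running both one-sided tests at level alpha alongside a level-alpha two-sided test are our own illustrative assumptions, not the paper's exact rule:

```python
from scipy.stats import norm

def five_decision(z, alpha=0.05):
    """Partition a z-statistic into five decision zones by combining a
    two-sided test (critical value c_two) with two one-sided tests
    (critical value c_one < c_two). Labels are illustrative shorthand."""
    c_two = norm.ppf(1 - alpha / 2)   # two-sided critical value (~1.96)
    c_one = norm.ppf(1 - alpha)       # one-sided critical value (~1.645)
    if z <= -c_two:
        return "theta < theta0 (direction established, point null rejected)"
    if z <= -c_one:
        return "theta <= theta0 (direction only; point null not rejected)"
    if z < c_one:
        return "no decision"
    if z < c_two:
        return "theta >= theta0 (direction only; point null not rejected)"
    return "theta > theta0 (direction established, point null rejected)"
```

The intermediate zones (between the one-sided and two-sided critical values) are where this rule gains power over a plain two-sided test while still respecting a possible point null.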
99.
The Bayesian paradigm provides an ideal platform to update uncertainties and carry them over into the future in the presence of data. Bayesian predictive power (BPP) reflects our belief in the eventual success of a clinical trial to meet its goals. In this paper we derive mathematical expressions for the most common types of outcomes, to make the BPP accessible to practitioners, facilitate fast computations in adaptive trial design simulations that use interim futility monitoring, and propose an organized BPP-based phase II-to-phase III design framework.
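A Monte Carlo sketch of BPP for a one-sided z-test on a normal mean difference, using a conjugate normal prior; the function name, the prior, and the design numbers are our own assumptions rather than the paper's closed-form expressions:

```python
import numpy as np
from scipy.stats import norm

def bayesian_predictive_power(dbar1, n1, n2, sigma=1.0, tau=10.0,
                              alpha=0.025, reps=20000, seed=0):
    """BPP = posterior-predictive probability that the final one-sided
    z-test is significant, given interim mean difference dbar1 from n1
    observations. Prior: delta ~ N(0, tau^2). Monte Carlo sketch."""
    rng = np.random.default_rng(seed)
    # Conjugate normal posterior for the true effect delta
    prec = 1 / tau**2 + n1 / sigma**2
    mu_post = (n1 / sigma**2) * dbar1 / prec
    sd_post = np.sqrt(1 / prec)
    delta = rng.normal(mu_post, sd_post, reps)       # draw the true effect
    dbar2 = rng.normal(delta, sigma / np.sqrt(n2))   # simulate stage-2 mean
    n = n1 + n2
    dbar = (n1 * dbar1 + n2 * dbar2) / n             # pooled final mean
    z = dbar / (sigma / np.sqrt(n))
    return np.mean(z > norm.ppf(1 - alpha))

bpp = bayesian_predictive_power(dbar1=0.3, n1=50, n2=50)
```

Unlike conditional power at a fixed effect, BPP averages the chance of final success over the posterior for the effect, which is what makes it suitable for interim futility monitoring.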
100.
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification in two respects: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; and (2) the median score may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that Type I error is generally controlled, while the overall adaptive power to detect treatment effects sacrifices approximately 4.5% for the simulation designs considered by performing the LRT; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
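The change-point idea for choosing a cutoff can be sketched as a profile-likelihood search over sorted scores under a two-component normal model; this is our own simplified stand-in for the paper's algorithm, with hypothetical data and parameters:

```python
import numpy as np

def changepoint_cutoff(scores, min_frac=0.1):
    """Choose the cutoff on sorted scores maximizing the two-group normal
    profile log-likelihood (each side gets its own mean and variance).
    min_frac keeps both subgroups from becoming degenerately small."""
    s = np.sort(scores)
    n = len(s)
    lo, hi = int(n * min_frac), n - int(n * min_frac)
    best_ll, best_cut = -np.inf, None
    for k in range(lo, hi):
        left, right = s[:k], s[k:]
        # Profile log-likelihood up to constants: -k*log(sd_L) - (n-k)*log(sd_R)
        ll = -k * np.log(left.std() + 1e-12) - (n - k) * np.log(right.std() + 1e-12)
        if ll > best_ll:
            best_ll, best_cut = ll, (s[k - 1] + s[k]) / 2
    return best_cut

# Hypothetical well-separated score mixture: the cutoff should land in the gap
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
cut = changepoint_cutoff(scores)
```

When the two latent subgroups have unequal sizes, a search of this kind can place the cutoff at the actual separation point, whereas the median is forced to split the sample 50/50.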