Results 81–90 of 6,493.
81.
马景卫 《中南林业科技大学学报(社会科学版)》2012,6(6):174-176
In sanda (Chinese free sparring), power is, alongside technique, one of the sport's most important elements. In competition it is applied in two forms, local power and whole-body power, which are displayed alternately throughout a bout; it is precisely the continual switching between the two, and the combined action of the various kinds of power in different situations, that enables an athlete to win. The scientific application and training of local and whole-body power not only shape a practitioner's athletic physique but also cultivate a distinctive sporting bearing, making sanda an admirably complete sport.
82.
《Journal of Statistical Computation and Simulation》2012,82(3):369-381
Likelihood ratios (LRs) are used to characterize the efficiency of diagnostic tests. In this paper, we use the classical weighted least squares (CWLS) test procedure, which was originally used for testing the homogeneity of relative risks, for comparing the LRs of two or more binary diagnostic tests. We compare the performance of this method with the relative diagnostic likelihood ratio (rDLR) method and the diagnostic likelihood ratio regression (DLRReg) approach in terms of size and power, and we observe that the performances of CWLS and rDLR are the same when used to compare two diagnostic tests, while the DLRReg method has higher type I error rates and powers. We also examine the performances of the CWLS and DLRReg methods for comparing three diagnostic tests in various sample size and prevalence combinations. On the basis of Monte Carlo simulations, we conclude that all of the tests are generally conservative and have low power, especially in settings of small sample size and low prevalence.
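As a concrete illustration of the quantities being compared, the positive and negative likelihood ratios of a binary diagnostic test follow directly from its 2x2 table. This is a minimal sketch with hypothetical counts; it shows the LRs themselves, not the CWLS comparison procedure.

```python
def likelihood_ratios(tp, fn, fp, tn):
    """Return (LR+, LR-) from a 2x2 table of test result vs. disease status.
    LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical counts: 90 true positives, 10 false negatives,
# 20 false positives, 80 true negatives.
lr_pos, lr_neg = likelihood_ratios(tp=90, fn=10, fp=20, tn=80)
print(round(lr_pos, 2), round(lr_neg, 3))  # 4.5 0.125
```

An LR+ well above 1 (here 4.5) means a positive result substantially raises the odds of disease; comparing such ratios across two or more tests is what the CWLS, rDLR, and DLRReg procedures formalize.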
83.
Extreme and catastrophic events pose challenges for normative models of risk management decision making. They invite development of new methods and principles to complement existing normative decision and risk analysis. Because such events are rare, it is difficult to learn about them from experience. They can prompt both too little concern before the fact, and too much after. Emotionally charged and vivid outcomes promote probability neglect and distort risk perceptions. Aversion to acting on uncertain probabilities saps precautionary action; moral hazard distorts incentives to take care; imperfect learning and social adaptation (e.g., herd‐following, group‐think) complicate forecasting and coordination of individual behaviors and undermine prediction, preparation, and insurance of catastrophic events. Such difficulties raise substantial challenges for normative decision theories prescribing how catastrophe risks should be managed. This article summarizes challenges for catastrophic hazards with uncertain or unpredictable frequencies and severities, hard‐to‐envision and incompletely described decision alternatives and consequences, and individual responses that influence each other. Conceptual models and examples clarify where and why new methods are needed to complement traditional normative decision theories for individuals and groups. For example, prospective and retrospective preferences for risk management alternatives may conflict; procedures for combining individual beliefs or preferences can produce collective decisions that no one favors; and individual choices or behaviors in preparing for possible disasters may have no equilibrium. Recent ideas for building “disaster‐resilient” communities can complement traditional normative decision theories, helping to meet the practical need for better ways to manage risks of extreme and catastrophic events.
84.
《Journal of Statistical Computation and Simulation》2012,82(11):1579-1592
The paper studies five entropy tests of exponentiality using five statistics based on different entropy estimates. Critical values for various sample sizes determined by means of Monte Carlo simulations are presented for each of the test statistics. By simulation, we compare the power of these five tests for various alternatives and sample sizes.
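Entropy tests of this kind are typically built on nonparametric spacing-based entropy estimators. The sketch below shows Vasicek's classical estimator together with one plausible exponentiality statistic; the statistic is an illustration under a flat assumption (it is not necessarily one of the paper's five), using the fact that the differential entropy of an exponential distribution equals 1 + log(mean).

```python
import math

def vasicek_entropy(x, m):
    """Vasicek's spacing-based entropy estimate H_{m,n}: the average of
    log(n * (X_(i+m) - X_(i-m)) / (2m)) over order statistics, with
    indices clipped at the sample boundaries."""
    n = len(x)
    s = sorted(x)
    total = 0.0
    for i in range(n):
        lo = s[max(i - m, 0)]
        hi = s[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

def exp_entropy_statistic(x, m):
    """Illustrative statistic: under Exp(lambda) the entropy equals
    1 + log(mean), so values near zero support exponentiality."""
    xbar = sum(x) / len(x)
    return vasicek_entropy(x, m) - 1 - math.log(xbar)
```

In a test of this form, critical values for the statistic are tabulated by Monte Carlo under the exponential null, exactly as the paper does for its five statistics.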
85.
《Journal of Statistical Computation and Simulation》2012,82(1-4):287-310
For the two-sample location and scale problem we propose an adaptive test which is based on so-called Lepage type tests. The well-known test of Lepage (1971) is a combination of the Wilcoxon test for location alternatives and the Ansari-Bradley test for scale alternatives, and it behaves well for symmetric and medium-tailed distributions. For the case of short-, medium- and long-tailed distributions we replace the Wilcoxon test and the Ansari-Bradley test by suitable other two-sample tests for location and scale, respectively, in order to get higher power than the classical Lepage test for such distributions. These tests are called Lepage type tests here. In practice, however, we generally have no clear idea about the distribution having generated our data. Thus, an adaptive test should be applied which takes the given data set into consideration. The proposed adaptive test is based on the concept of Hogg (1974), i.e., first, to classify the unknown symmetric distribution function with respect to a measure for tailweight and, second, to apply an appropriate Lepage type test for this classified type of distribution. We compare the adaptive test with the three Lepage type tests in the adaptive scheme and with the classical Lepage test as well as with other parametric and nonparametric tests. The power comparison is carried out via Monte Carlo simulation. It is shown that the adaptive test is the best one for the broad class of distributions considered.
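The classical Lepage statistic that the adaptive scheme generalizes is the sum of the squared standardized Wilcoxon rank-sum and Ansari-Bradley statistics. A minimal sketch (assuming no ties; the moments are the standard exact null moments):

```python
def lepage_statistic(x, y):
    """Classical Lepage (1971) statistic. Sketch only: assumes no ties."""
    m, n = len(x), len(y)
    N = m + n
    combined = list(x) + list(y)
    order = sorted(range(N), key=lambda i: combined[i])
    ranks = [0] * N
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    # Wilcoxon rank-sum statistic for the first sample
    W = sum(ranks[i] for i in range(m))
    EW = m * (N + 1) / 2
    VW = m * n * (N + 1) / 12
    # Ansari-Bradley statistic: scores min(rank, N + 1 - rank)
    A = sum(min(ranks[i], N + 1 - ranks[i]) for i in range(m))
    if N % 2 == 0:
        EA = m * (N + 2) / 4
        VA = m * n * (N + 2) * (N - 2) / (48.0 * (N - 1))
    else:
        EA = m * (N + 1) ** 2 / (4 * N)
        VA = m * n * (N + 1) * (3 + N ** 2) / (48.0 * N ** 2)
    return (W - EW) ** 2 / VW + (A - EA) ** 2 / VA
```

Under the null hypothesis the statistic is asymptotically chi-squared with 2 degrees of freedom; the Lepage type tests replace the two components with other location and scale rank statistics matched to the estimated tailweight.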
86.
《Journal of Statistical Computation and Simulation》2012,82(14):2901-2921
Recently, Ristić and Nadarajah [A new lifetime distribution. J Stat Comput Simul. 2014;84:135–150] introduced the Poisson generated family of distributions and investigated the properties of a special case named the exponentiated-exponential Poisson distribution. In this paper, we study general mathematical properties of the Poisson-X family in the context of the T-X family of distributions pioneered by Alzaatreh et al. [A new method for generating families of continuous distributions. Metron. 2013;71:63–79], which include quantile, shapes of the density and hazard rate functions, asymptotics and Shannon entropy. We obtain a useful linear representation of the family density and explicit expressions for the ordinary and incomplete moments, mean deviations and generating function. One special lifetime model called the Poisson power-Cauchy is defined and some of its properties are investigated. This model can have flexible hazard rate shapes such as increasing, decreasing, bathtub and upside-down bathtub. The method of maximum likelihood is used to estimate the model parameters. We illustrate the flexibility of the new distribution by means of three applications to real life data sets.
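To illustrate the compounding idea, the sketch below uses one common "Poisson-G" construction: if N is zero-truncated Poisson(lambda) and X is the maximum of N iid variables with baseline cdf G, then F(x) = (exp(lambda * G(x)) - 1) / (exp(lambda) - 1). The baseline here is a power-Cauchy cdf on x > 0. Both the compounding direction (max rather than min) and the parameterization are assumptions for this sketch and may differ from the paper's exact family.

```python
from math import atan, exp, log, pi, tan

def power_cauchy_cdf(x, alpha, sigma):
    """Power-Cauchy baseline cdf G(x) = (2/pi) * arctan((x/sigma)^alpha), x > 0."""
    return (2 / pi) * atan((x / sigma) ** alpha)

def poisson_g_cdf(x, lam, alpha, sigma):
    """Zero-truncated Poisson compounding of the maximum of N iid
    power-Cauchy variables (one common Poisson-G construction)."""
    return (exp(lam * power_cauchy_cdf(x, alpha, sigma)) - 1) / (exp(lam) - 1)

def poisson_g_quantile(u, lam, alpha, sigma):
    """Invert F(x) = u: first recover G(x), then invert the baseline cdf."""
    g = log(1 + u * (exp(lam) - 1)) / lam
    return sigma * tan(pi * g / 2) ** (1 / alpha)
```

The closed-form quantile function makes inversion sampling and quantile-based shape measures straightforward, which is one reason such families are analytically convenient.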
87.
《The American statistician》2012,66(4):321-326
A statistical test can be seen as a procedure to produce a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability to make a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject a null hypothesis or not), Kaiser’s directional two-sided test as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involve three possible decisions to infer the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain of statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein’s theories. In this article, we introduce a five-decision rule testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the testing procedures of Kaiser (no assumption on a point hypothesis being impossible) and Jones and Tukey (higher power), allowing for a nonnegligible (typically 20%) reduction of the sample size needed to reach a given statistical power to get a significant result, compared to the traditional approach.
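A hedged sketch of how such a five-decision rule could operate on a z-statistic: a two-sided test combined with two one-sided tests, where a two-sided rejection plus a directional rejection yields a strict conclusion, and a one-sided rejection alone yields a weak one. The decision labels and the exact combination are illustrative, not the authors' formal definition.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def five_decision(z, alpha=0.05):
    """Illustrative five-decision rule for H0: theta = 0, each component
    test run at level alpha."""
    two_sided = 2 * (1 - phi(abs(z))) <= alpha
    right = 1 - phi(z) <= alpha   # one-sided alternative theta > 0
    left = phi(z) <= alpha        # one-sided alternative theta < 0
    if two_sided and right:
        return "theta > 0"
    if two_sided and left:
        return "theta < 0"
    if right:
        return "theta >= 0"
    if left:
        return "theta <= 0"
    return "no decision"
```

For instance, z = 1.8 rejects the one-sided test but not the two-sided one, giving the weak conclusion "theta >= 0"; z = 2.5 rejects both and gives "theta > 0".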
88.
Ofir Harari Grace Hsu Louis Dron Jay J. H. Park Kristian Thorlund Edward J. Mills 《Pharmaceutical statistics》2021,20(2):256-271
The Bayesian paradigm provides an ideal platform to update uncertainties and carry them over into the future in the presence of data. Bayesian predictive power (BPP) reflects our belief in the eventual success of a clinical trial to meet its goals. In this paper we derive mathematical expressions for the most common types of outcomes, to make the BPP accessible to practitioners, facilitate fast computations in adaptive trial design simulations that use interim futility monitoring, and propose an organized BPP-based phase II-to-phase III design framework.
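For intuition, here is one closed-form BPP expression for a one-sided z-test with known variance and a flat prior: a textbook-style sketch, not necessarily the paper's parameterization. Given n1 observations with mean theta_hat, the posterior is N(theta_hat, sigma^2/n1), so the mean of n2 future observations has predictive distribution N(theta_hat, sigma^2/n1 + sigma^2/n2), and BPP is the predictive probability that the future mean clears the significance threshold.

```python
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def bayesian_predictive_power(theta_hat, sigma, n1, n2, z_alpha=1.96):
    """BPP for a future one-sided z-test on n2 new observations, given
    interim data (n1 observations, mean theta_hat), known sigma, flat prior."""
    pred_sd = sigma * sqrt(1 / n1 + 1 / n2)      # predictive sd of future mean
    threshold = z_alpha * sigma / sqrt(n2)       # future mean needed for significance
    return norm_cdf((theta_hat - threshold) / pred_sd)
```

As n1 grows, the posterior uncertainty vanishes and BPP converges to the classical power at theta_hat; with little interim data it is pulled toward 0.5, reflecting residual uncertainty about the effect.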
89.
Development of predictive signatures for treatment selection in precision medicine with survival outcomes
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores used to divide the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification in two respects: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that type I error is generally controlled, while the overall adaptive power to detect treatment effects sacrifices approximately 4.5% for the simulation designs considered by performing the LRT; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
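The change-point idea for choosing the cutoff can be sketched as a profile-likelihood scan over splits of the sorted scores. Under a Gaussian working model with common variance, maximizing the likelihood over the two subgroup means is equivalent to minimizing the residual sum of squares, so the scan below is an illustration of the principle, not the authors' exact algorithm.

```python
def change_point_cutoff(scores):
    """Scan all splits of the sorted scores into a low and a high group,
    profile out the two group means, and return the midpoint of the split
    with the smallest residual sum of squares (largest profile likelihood)."""
    s = sorted(scores)
    n = len(s)
    best_rss, best_cut = float("inf"), None
    for k in range(1, n):               # split: s[:k] vs s[k:]
        left, right = s[:k], s[k:]
        ml = sum(left) / k
        mr = sum(right) / (n - k)
        rss = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if rss < best_rss:
            best_rss, best_cut = rss, (left[-1] + right[0]) / 2
    return best_cut
```

When the two subgroups are of very different sizes, this data-driven cutoff can land far from the median, which is exactly the situation where the abstract reports the change-point algorithm outperforming the median split.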
90.
Response‐adaptive designs for binary responses: How to offer patient benefit while being robust to time trends?
Response‐adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is changes in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases due to their lengthy accrual period. We compute the type I error inflation as a function of the time trend magnitude to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve type I error in these contexts and their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the 2‐armed and the multi‐armed case. We further propose a RAR design for multi‐armed clinical trials, which is computationally efficient and robust to the several time trends considered.
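The inflation mechanism can be reproduced in a small Monte Carlo sketch: under H0 both arms are identical at every time point, but a drift in the response probability combined with adaptive allocation confounds arm with calendar time and biases the end-of-trial comparison. The allocation rule below (tilt allocation toward the currently leading arm) is illustrative, not one of the RAR rules studied in the paper.

```python
import random
from math import sqrt

def simulate_type1(n=200, drift=0.0, sims=2000, seed=1):
    """Estimated type I error of a pooled two-proportion z-test at the 5%
    level under a simple response-adaptive allocation rule. Under H0 both
    arms share the same response probability, which rises linearly by
    'drift' over the course of the trial (patient drift)."""
    random.seed(seed)
    rejections = 0
    for _ in range(sims):
        succ, tot = [0, 0], [0, 0]
        for i in range(n):
            p_base = 0.3 + drift * i / n                 # same on both arms (H0)
            rate = [succ[a] / tot[a] if tot[a] else 0.5 for a in (0, 1)]
            p_arm1 = 0.2 + 0.6 * (rate[1] >= rate[0])    # tilt toward the leader
            a = 1 if random.random() < p_arm1 else 0
            tot[a] += 1
            succ[a] += random.random() < p_base
        if min(tot) == 0:
            continue
        p0, p1 = succ[0] / tot[0], succ[1] / tot[1]
        pbar = (succ[0] + succ[1]) / n
        se = sqrt(pbar * (1 - pbar) * (1 / tot[0] + 1 / tot[1]))
        if se > 0 and abs(p1 - p0) / se > 1.96:
            rejections += 1
    return rejections / sims
```

With drift = 0 the rejection rate sits near the nominal 5%; a strong drift pushes it well above nominal, which is the phenomenon the paper's correction methods are designed to repair.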