51.
In this study, we develop a test for the equality of variances of several normal populations based on a computational approach. The proposed method is compared numerically with existing methods, and the numerical results demonstrate that it performs very well in terms of type I error rate and power. Furthermore, we study the robustness of the tests via simulation when the underlying data come from t, exponential, and uniform distributions. Finally, we analyze, using the proposed test, a real dataset that motivated our study.
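The abstract does not give the details of the computational-approach test, but the classical competitors such a test is benchmarked against are easy to sketch. A minimal illustration (assuming SciPy; the sample sizes and means are arbitrary) applies Bartlett's test and the robust Levene test to three normal samples sharing a common variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Three normal samples with equal variances, as under H0.
groups = [rng.normal(loc=mu, scale=2.0, size=30) for mu in (0.0, 1.0, 2.0)]

# Classical competitors: Bartlett's test (exact-normal assumption)
# and Levene's test with median centring (robust to non-normality).
bart_stat, bart_p = stats.bartlett(*groups)
lev_stat, lev_p = stats.levene(*groups, center="median")

print(f"Bartlett: stat={bart_stat:.3f}, p={bart_p:.3f}")
print(f"Levene:   stat={lev_stat:.3f}, p={lev_p:.3f}")
```

With equal true variances, both tests should usually fail to reject at the 5% level.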
52.
Many studies have compared the power of goodness-of-fit (GOF) tests under simple random sampling (SRS) and ranked set sampling (RSS). In our study, a different design procedure and ranking process in RSS are thoroughly investigated. A simulation study compares the power of the Kolmogorov–Smirnov test under SRS and RSS with different set and cycle sizes for several distributions. A level-2 sampling design and partially rank-ordered sets are used, and we benefit from auxiliary variables in the ranking process. Results are presented in tables and figures. Under these conditions, we show that RSS outperforms SRS in finite populations.
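As a hedged sketch of the two sampling schemes being compared (the `ranked_set_sample` helper is hypothetical, and perfect ranking is assumed rather than ranking by auxiliary variables), a balanced RSS draw and an SRS draw of the same size can each be fed to the one-sample Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def ranked_set_sample(draw, set_size, cycles, rng):
    """Basic balanced RSS with perfect ranking: in each cycle, for rank i,
    draw `set_size` units and keep only the i-th order statistic."""
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            batch = np.sort(draw(set_size, rng))
            out.append(batch[i])
    return np.array(out)

draw = lambda n, rng: rng.normal(size=n)
rss = ranked_set_sample(draw, set_size=4, cycles=10, rng=rng)  # n = 40
srs = draw(40, rng)

# One-sample Kolmogorov–Smirnov test against the true N(0, 1) model.
print("RSS:", stats.kstest(rss, "norm"))
print("SRS:", stats.kstest(srs, "norm"))
```

Note that the standard KS critical values assume iid sampling, so the RSS p-value above is only indicative; power comparisons of the kind described here are calibrated by simulation.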
53.
In this paper, we propose a multiple deferred state repetitive group sampling plan, a new plan that incorporates features of both the multiple deferred state sampling plan and the repetitive group sampling plan, for assuring Weibull- or gamma-distributed mean life of products. Product quality is represented by the ratio of true mean life to specified mean life. The two-points-on-the-operating-characteristic-curve approach is used to determine the optimal parameters of the proposed plan, which are obtained by formulating an optimization problem for various combinations of producer's and consumer's risks under both distributions. A sensitivity analysis of the proposed plan is discussed, and its implementation is illustrated with real-life and simulated data. The proposed plan under the Weibull distribution is compared with existing sampling plans. The average sample number (ASN) of the proposed plan and the failure probability of the product are obtained under the Weibull, gamma, and Birnbaum–Saunders distributions for a specified shape parameter and compared with one another. In addition, the ASNs of the proposed plan under the Weibull and gamma distributions are compared.
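The proposed plan itself is more elaborate, but the two-points-on-the-OC-curve idea can be sketched for a plain single-sampling baseline under a Weibull life model. Here the plan `(n, c) = (20, 2)`, the shape parameter, and the test-time constant `a` are illustrative assumptions, not the paper's optimal parameters:

```python
from math import exp, gamma

from scipy import stats

def fail_prob(ratio, shape=2.0, a=0.5):
    """P(an item fails before test time a*mu0) under a Weibull life model
    with shape parameter `shape`, when the true mean life is ratio * mu0."""
    return 1.0 - exp(-((a * gamma(1 + 1 / shape) / ratio) ** shape))

def accept_prob(n, c, ratio):
    # Lot accepted if at most c of n tested items fail: binomial OC value.
    return stats.binom.cdf(c, n, fail_prob(ratio))

# Two points on the OC curve for a hypothetical (n, c) = (20, 2) plan:
# the consumer's point at mean-life ratio 1, the producer's at ratio 2.
print(f"P(accept | ratio=1) = {accept_prob(20, 2, 1.0):.3f}")
print(f"P(accept | ratio=2) = {accept_prob(20, 2, 2.0):.3f}")
```

The optimization described in the abstract searches for plan parameters so that the first probability stays below the consumer's risk while the second stays above one minus the producer's risk.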
54.
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores dividing patients into biomarker-positive and biomarker-negative subgroups. This may, however, bias subgroup identification in two respects: (1) treatment may be equally effective for all patients, with no subgroup difference; and (2) the median score may be an inappropriate cutoff if the two subgroup sizes differ substantially. We use a univariate composite score method to convert a patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients; in the context of identifying the subgroup of responders in an adaptive design to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we use a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that type I error is generally controlled, while performing the LRT sacrifices approximately 4.5% of the overall adaptive power to detect treatment effects in the designs considered; furthermore, the change-point algorithm considerably outperforms the median cutoff when the subgroup sizes differ substantially.
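A likelihood-based change-point cutoff of the kind described can be sketched as follows. The score distributions, subgroup sizes, and the `split_loglik` profile are illustrative assumptions; the paper's algorithm may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical univariate composite scores: 70 non-responders around 0,
# 30 responders around 2, deliberately unbalanced subgroup sizes.
scores = np.sort(np.r_[rng.normal(0, 1, 70), rng.normal(2, 1, 30)])

def split_loglik(x, j):
    """Profile log-likelihood of splitting sorted scores after index j
    into two normal subgroups (mean and variance fitted per side)."""
    ll = 0.0
    for seg in (x[:j], x[j:]):
        v = seg.var()
        if v <= 0:
            return -np.inf
        ll += -0.5 * len(seg) * np.log(v)
    return ll

# Scan all admissible split points (at least 5 patients per side).
best_j = max(range(5, len(scores) - 5), key=lambda j: split_loglik(scores, j))
cutoff = 0.5 * (scores[best_j - 1] + scores[best_j])
print(f"likelihood-based cutoff: {cutoff:.2f} (median: {np.median(scores):.2f})")
```

With 70/30 subgroups, the median would misclassify many non-responders as biomarker-positive, while the change-point cutoff tracks the gap between the two score distributions.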
55.
Response‐adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better-performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is a change in the characteristics of recruited patients, referred to as patient drift, which is a realistic concern for rare-disease trials because of their lengthy accrual. We compute the type I error inflation as a function of the magnitude of the time trend to determine the contexts in which the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts, as well as their performance on other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare-disease context for several RAR rules, distinguishing between the two-armed and multi-armed cases. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
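The inflation mechanism can be reproduced in a small simulation. The Thompson-style allocation rule, the drift size, the burn-in, and the use of Fisher's exact test at the end are all illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_trial(trend, n=200, burn_in=40, rng=rng):
    """Two-armed trial with truly identical arms (H0). Response rates drift
    upward over time; allocation follows a Thompson-style RAR rule."""
    succ = np.ones(2)
    fail = np.ones(2)            # Beta(1, 1) pseudo-counts per arm
    y = [[], []]
    for t in range(n):
        if t < burn_in:
            arm = t % 2          # equal allocation during burn-in
        else:
            # Randomise towards the arm that currently looks better.
            arm = int(rng.beta(succ[1], fail[1]) > rng.beta(succ[0], fail[0]))
        p = 0.3 + trend * t / n  # patient drift raises the response rate
        r = rng.random() < p
        succ[arm] += r
        fail[arm] += 1 - r
        y[arm].append(r)
    table = [[sum(y[0]), len(y[0]) - sum(y[0])],
             [sum(y[1]), len(y[1]) - sum(y[1])]]
    _, pval = stats.fisher_exact(table)
    return pval < 0.05

for trend in (0.0, 0.3):
    rate = np.mean([one_trial(trend) for _ in range(200)])
    print(f"trend={trend}: rejection rate ~ {rate:.3f}")
```

Because RAR confounds allocation with calendar time, the drifting response rate is attributed unevenly to the two arms, which is what pushes the rejection rate under H0 away from the nominal 5%.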
56.
A stable money demand function is essential when a monetary aggregate is used as the monetary policy instrument. There is therefore a need to examine the stability of the money demand function in Nigeria after the deregulation of the financial sector. To this end, the study applies the CUSUM (cumulative sum) and CUSUMSQ (CUSUM of squares) tests after using the autoregressive distributed lag (ARDL) bounds test to establish a long-run relationship between monetary aggregates and their determinants. The results show that a long-run relationship holds and that the demand for money in Nigeria is stable. In addition, the inflation rate is found to be a better proxy for the opportunity-cost variable than the interest rate. The main implication is that the interest rate is ineffective as a monetary policy instrument in Nigeria.
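The CUSUMSQ statistic underlying the stability check can be sketched on simulated data. The toy regression below stands in for the actual money-demand equation, and a formal test would compare the maximum deviation against tabulated critical bounds rather than just printing it:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical money-demand-style regression with stable coefficients (H0).
n = 120
x = np.c_[np.ones(n), rng.normal(size=n)]
beta = np.array([1.0, 0.8])
y = x @ beta + rng.normal(scale=0.2, size=n)

# OLS residuals, then the CUSUM-of-squares path:
resid = y - x @ np.linalg.lstsq(x, y, rcond=None)[0]
s = np.cumsum(resid**2) / np.sum(resid**2)   # CUSUMSQ path, ends at 1
drift = np.abs(s - np.arange(1, n + 1) / n)  # departure from the 45-degree line
print(f"max CUSUMSQ deviation: {drift.max():.3f}")
```

Under parameter stability the path hugs the 45-degree line; a structural break in the money demand function shows up as the path escaping the critical band.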
57.
Estimation and Properties of a Time-Varying EGARCH(1,1)-in-Mean Model
Time-varying GARCH-M models are commonly employed in econometrics and financial economics, yet the recursive nature of the conditional variance makes likelihood analysis of these models computationally infeasible. This article outlines the issues and suggests employing a Markov chain Monte Carlo algorithm that allows the calculation of a classical estimator via the simulated EM algorithm, or of a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying-parameter EGARCH(1,1)-M model are derived; we discuss these properties and apply the suggested Bayesian estimation to three major stock markets.
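The model whose likelihood the MCMC scheme targets can be sketched as a simulation of Nelson's EGARCH(1,1) recursion with an in-mean term. All parameter values below are illustrative, and the constant-parameter version is shown rather than the time-varying one:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_egarch_m(n, omega=-0.1, beta=0.95, alpha=0.1, gamma=-0.05,
                      lam=0.05):
    """Simulate an EGARCH(1,1)-in-mean path: the conditional log-variance
    follows Nelson's recursion and feeds back into the mean via lam."""
    e_abs = np.sqrt(2 / np.pi)      # E|z| for standard normal innovations
    logv = omega / (1 - beta)       # start at the stationary log-variance
    r = np.empty(n)
    for t in range(n):
        sigma = np.exp(0.5 * logv)
        z = rng.standard_normal()
        r[t] = lam * sigma**2 + sigma * z   # in-mean (risk-premium) term
        # Log-variance recursion: size effect via |z|, sign effect via z.
        logv = omega + beta * logv + alpha * (abs(z) - e_abs) + gamma * z
    return r

returns = simulate_egarch_m(1000)
print(f"sample sd of simulated returns: {returns.std():.3f}")
```

It is exactly this recursion, with each conditional variance depending on the whole past path, that makes direct likelihood evaluation awkward and motivates the O(T) simulation-based estimators.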
58.
This paper presents powerful omnibus tests for multivariate normality based on the likelihood ratio and on characterizations of the multivariate normal distribution. The power of the proposed tests is studied against various alternatives via Monte Carlo simulation. The simulations show that our tests compare well with other powerful tests, including multivariate versions of the Shapiro–Wilk and Anderson–Darling tests.
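One classical benchmark in such power comparisons, Mardia's kurtosis test, is short enough to sketch. This is a standard competitor, not the proposed likelihood-ratio tests themselves, and the sample dimensions are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, p = 200, 3
x = rng.normal(size=(n, p))                  # H0: multivariate normal

# Mardia's multivariate kurtosis statistic b_{2,p}.
xc = x - x.mean(axis=0)
s_inv = np.linalg.inv(np.cov(x, rowvar=False, bias=True))
d = np.einsum("ij,jk,ik->i", xc, s_inv, xc)  # squared Mahalanobis distances
b2 = np.mean(d**2)

# Asymptotically, (b2 - p(p+2)) / sqrt(8p(p+2)/n) is standard normal.
z = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
pval = 2 * stats.norm.sf(abs(z))
print(f"Mardia kurtosis: z={z:.2f}, p={pval:.3f}")
```

Heavy-tailed or skewed alternatives inflate the Mahalanobis distances, driving `b2` away from its null value `p(p+2)`.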
59.
Tests for the equality of variances using independent samples are widely used in data analysis. Conover et al. [A comparative study of tests for homogeneity of variance, with applications to the outer continental shelf bidding data. Technometrics. 1981;23:351–361] won the Youden Prize by comparing 56 variations of popular tests for variance on the basis of robustness and power in 60 different scenarios. None of the tests they compared was both robust and powerful for the skewed distributions they considered. This study examines 12 variations they did not consider and shows that 10 of them are robust for those skewed distributions as well as for the lognormal distribution, which they did not study. Three of the 12 have clearly superior power for skewed distributions and are competitive in robustness and power across all of the distributions considered; they are recommended for general use on the basis of robustness, power, and ease of application.
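The robust variations at issue in such comparisons are largely centring choices in Levene-type statistics. A minimal SciPy sketch (the lognormal parameters and trimming proportion are illustrative) contrasts the median-centred Brown–Forsythe version with a trimmed-mean version on skewed samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Skewed (lognormal) samples with identical distributions, as under H0.
groups = [rng.lognormal(mean=0.0, sigma=0.5, size=40) for _ in range(3)]

# Median-centred (Brown–Forsythe) and trimmed-mean variants of Levene's
# test are among the variations that stay robust under skewness.
for center in ("median", "trimmed"):
    stat, p = stats.levene(*groups, center=center, proportiontocut=0.1)
    print(f"Levene ({center}): stat={stat:.3f}, p={p:.3f}")
```

Mean-centred versions tend to over-reject for skewed data, which is why the robust centrings dominate recommendations for general use.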
60.
In late-phase confirmatory clinical trials in oncology, time-to-event (TTE) endpoints are commonly used as primary endpoints for establishing the efficacy of investigational therapies. Among these, overall survival (OS) is considered the gold standard. However, OS data can take years to mature, and their use as an efficacy measure can be confounded by post-treatment rescue therapies or supportive care. Therefore, to accelerate the development process and better characterize the treatment effect of new investigational therapies, other TTE endpoints such as progression-free survival and event-free survival (EFS) are used as primary efficacy endpoints in some confirmatory trials, either as surrogates for OS or as direct measures of clinical benefit. For evaluating novel treatments for acute myeloid leukemia, EFS has gradually been recognized as a direct measure of clinical benefit, although its application remains controversial, mainly because of debate over the definition of treatment failure (TF) events. In this article, we investigate the EFS endpoint under the most conservative definition of the timing of TF, namely Day 1 after randomization. Specifically, the corresponding non-proportional-hazards pattern of the EFS endpoint is investigated with both analytical and numerical approaches.
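The non-proportional-hazards pattern induced by placing TF events at Day 1 can be illustrated with a toy simulation. All failure rates, hazards, and time windows below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(21)

def efs_times(tf_prob, hazard, rng, n=5000):
    """EFS with treatment-failure events forced to Day 1: a point mass at
    t=1 mixed with an exponential event time for responding patients."""
    tf = rng.random(n) < tf_prob
    t = rng.exponential(1.0 / hazard, size=n)
    return np.where(tf, 1.0, 1.0 + t)

ctrl = efs_times(tf_prob=0.40, hazard=0.10, rng=rng)
trt = efs_times(tf_prob=0.25, hazard=0.08, rng=rng)

# Crude empirical hazard ratio over successive windows: the TF point mass
# dominates the first window, so the ratio there differs from the ratio
# seen in later follow-up, i.e. the hazards are not proportional.
for lo, hi in [(0.0, 1.5), (1.5, 10.0), (10.0, 30.0)]:
    def h(x):
        at_risk = np.sum(x >= lo)
        events = np.sum((x >= lo) & (x < hi))
        return events / max(at_risk, 1)
    print(f"[{lo:4.1f}, {hi:4.1f}): HR ~ {h(trt) / h(ctrl):.2f}")
```

A standard log-rank test assumes a constant hazard ratio, so a window-varying ratio like this is what motivates the analytical and numerical study of the pattern.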