3,865 results found; search took 15 ms
61.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better-performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is a change in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual periods. We compute the type I error inflation as a function of the magnitude of the time trend to determine the contexts in which the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts, as well as their performance on other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare-disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
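The time-trend problem can be illustrated with a minimal simulation sketch (not the paper's design): a two-arm trial under the null with Thompson-sampling-style allocation, where a linear drift in the success probability affects both arms equally; the rejection rate of the final two-proportion z-test estimates the inflated type I error. All parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rar_trial(n=200, p0=0.3, drift=0.0):
    """One two-arm trial under H0 (both arms share success probability p0),
    with Thompson-sampling-style RAR and an optional linear time trend.
    Returns True if the final two-proportion z-test (wrongly) rejects."""
    succ = np.zeros(2)
    fail = np.zeros(2)
    outcomes = [[], []]
    for t in range(n):
        p_t = p0 + drift * t / n                 # patient drift over accrual
        arm = int(np.argmax(rng.beta(1 + succ, 1 + fail)))
        y = float(rng.random() < p_t)
        succ[arm] += y
        fail[arm] += 1 - y
        outcomes[arm].append(y)
    n0, n1 = len(outcomes[0]), len(outcomes[1])
    if min(n0, n1) < 2:
        return False
    pool = (succ[0] + succ[1]) / n               # pooled success proportion
    se = np.sqrt(pool * (1 - pool) * (1 / n0 + 1 / n1))
    if se == 0:
        return False
    z = (np.mean(outcomes[1]) - np.mean(outcomes[0])) / se
    return abs(z) > 1.96                         # two-sided 5% test

reps = 500
rate_flat = np.mean([rar_trial(drift=0.0) for _ in range(reps)])
rate_drift = np.mean([rar_trial(drift=0.3) for _ in range(reps)])
```

Comparing `rate_flat` with `rate_drift` shows how a trend that is identical in both arms can still distort the test once allocation is outcome-adaptive.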
62.
A stable money demand function is essential when a monetary aggregate is used as a monetary policy instrument. There is therefore a need to examine the stability of the money demand function in Nigeria after the deregulation of the financial sector. To this end, the study applied CUSUM (cumulative sum) and CUSUMSQ (CUSUM of squares) tests, after using the autoregressive distributed lag (ARDL) bounds test to establish the existence of a long-run relationship between monetary aggregates and their determinants. The results show that a long-run relationship holds and that the demand for money in Nigeria is stable. In addition, the inflation rate is found to be a better proxy for the opportunity-cost variable than the interest rate. The main implication of the study is that the interest rate is ineffective as a monetary policy instrument in Nigeria.
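A minimal sketch of this kind of stability check, assuming an OLS-based CUSUM variant (scaled cumulative sums of OLS residuals compared with a Brownian-bridge bound); the simulated data, series length, and the approximate 5% critical value of 1.358 are illustrative assumptions, not the study's Nigerian data:

```python
import numpy as np

def ols_cusum(y, x):
    """OLS-based CUSUM statistic: scaled cumulative sums of OLS residuals.
    Under parameter stability the path behaves like a Brownian bridge, so
    sup|W| above roughly 1.358 signals instability at the 5% level."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    sigma = e.std(ddof=X.shape[1])
    W = np.cumsum(e) / (sigma * np.sqrt(len(y)))
    return np.max(np.abs(W))

rng = np.random.default_rng(1)
x = rng.normal(size=300)
stable = 1.0 + 0.5 * x + rng.normal(size=300)     # stable relationship
broken = stable + 3.0 * (np.arange(300) >= 150)   # level shift halfway through
stat_stable = ols_cusum(stable, x)
stat_broken = ols_cusum(broken, x)
```

The statistic for the series with a structural break exceeds the bound, while the stable series typically stays well inside it.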
63.
This paper presents some powerful omnibus tests for multivariate normality based on the likelihood ratio and on characterizations of the multivariate normal distribution. The power of the proposed tests is studied against various alternatives via Monte Carlo simulation. The simulation studies show that our tests compare well with other powerful tests, including multivariate versions of the Shapiro–Wilk and Anderson–Darling tests.
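As a point of reference for the omnibus competitors mentioned above, here is a sketch of one classical multivariate normality test, Mardia's skewness/kurtosis test (not the paper's likelihood-ratio tests), implemented from its standard definitions:

```python
import numpy as np
from scipy import stats

def mardia(X):
    """Mardia's multivariate skewness and kurtosis tests of normality.
    Returns (skewness p-value, kurtosis p-value)."""
    n, p = X.shape
    Z = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    D = Z @ S_inv @ Z.T                        # Mahalanobis inner products
    b1 = (D ** 3).sum() / n ** 2               # multivariate skewness
    b2 = (np.diag(D) ** 2).mean()              # multivariate kurtosis
    df = p * (p + 1) * (p + 2) / 6
    p_skew = stats.chi2.sf(n * b1 / 6, df)     # skewness stat ~ chi2(df)
    kurt_z = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    p_kurt = 2 * stats.norm.sf(abs(kurt_z))    # kurtosis stat ~ N(0,1)
    return p_skew, p_kurt

rng = np.random.default_rng(2)
X_norm = rng.multivariate_normal(np.zeros(3), np.eye(3), size=500)
X_skew = rng.exponential(size=(500, 3))        # clearly non-normal
p_skew_norm, p_kurt_norm = mardia(X_norm)
p_skew_exp, _ = mardia(X_skew)
```

The exponential sample is rejected decisively, while the normal sample typically is not.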
64.
Tests for equality of variances using independent samples are widely used in data analysis. Conover et al. [A comparative study of tests for homogeneity of variance, with applications to the outer continental shelf bidding data. Technometrics. 1981;23:351–361] won the Youden Prize by comparing 56 variations of popular tests for variance on the basis of robustness and power in 60 different scenarios. None of the tests they compared was both robust and powerful for the skewed distributions they considered. This study looks at 12 variations they did not consider and shows that 10 of them are robust for those skewed distributions as well as for the lognormal distribution, which they did not study. Three of the 12 have clearly superior power for skewed distributions and are competitive in terms of robustness and power for all of the distributions considered. They are recommended for general use on the basis of robustness, power, and ease of application.
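One classic robust choice in this literature is the Brown–Forsythe variant of Levene's test, which replaces means with medians. A minimal sketch using SciPy; the simulated lognormal samples are illustrative, not the study's scenarios:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Two skewed (lognormal) samples drawn from the same distribution,
# hence with equal variances: a robust test should rarely reject here.
a = rng.lognormal(mean=0.0, sigma=0.5, size=80)
b = rng.lognormal(mean=0.0, sigma=0.5, size=80)

# Brown-Forsythe variant: Levene's test on deviations from the *median*,
# which is what keeps the test's size stable under skewness.
stat, pvalue = stats.levene(a, b, center='median')
```

Using `center='mean'` instead recovers the original Levene test, which is less robust for skewed data.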
65.
In late-phase confirmatory clinical trials in oncology, time-to-event (TTE) endpoints are commonly used as primary endpoints for establishing the efficacy of investigational therapies. Among these TTE endpoints, overall survival (OS) is regarded as the gold standard. However, OS data can take years to mature, and its use for measuring efficacy can be confounded by post-treatment rescue therapies or supportive care. Therefore, to accelerate the development process and better characterize the treatment effect of new investigational therapies, other TTE endpoints such as progression-free survival and event-free survival (EFS) are applied as primary efficacy endpoints in some confirmatory trials, either as a surrogate for OS or as a direct measure of clinical benefit. For evaluating novel treatments for acute myeloid leukemia, EFS has gradually been recognized as a direct measure of clinical benefit. However, the application of an EFS endpoint remains controversial, mainly because of the debate surrounding the definition of treatment failure (TF) events. In this article, we investigate the EFS endpoint under the most conservative definition of the timing of TF, namely Day 1 after randomization. Specifically, the corresponding non-proportional-hazards pattern of the EFS endpoint is investigated with both analytical and numerical approaches.
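A small numerical sketch of why dating TF to Day 1 induces non-proportional hazards: EFS becomes a mixture of a Day-1 point mass and a continuous tail, so the hazard ratio at Day 1 differs from the later, tail-driven ratio. All parameter values below are illustrative assumptions, not trial data:

```python
import numpy as np

# EFS with treatment failure (TF) dated to Day 1: a point mass of early
# events plus an exponential tail thereafter (assumed parameter values).
q_ctrl, q_trt = 0.30, 0.15          # P(TF at Day 1) per arm
lam_ctrl, lam_trt = 0.010, 0.008    # daily hazard after Day 1

t = np.arange(1, 721)               # days 1..720
S_ctrl = (1 - q_ctrl) * np.exp(-lam_ctrl * (t - 1))
S_trt = (1 - q_trt) * np.exp(-lam_trt * (t - 1))

def discrete_hazard(S):
    """Discrete-time hazard h(t) = [S(t-1) - S(t)] / S(t-1), with S(0) = 1."""
    S_full = np.concatenate([[1.0], S])
    return (S_full[:-1] - S_full[1:]) / S_full[:-1]

hr = discrete_hazard(S_trt) / discrete_hazard(S_ctrl)
# hr[0] = q_trt / q_ctrl reflects the Day-1 TF mass, while later values
# settle near the tail ratio, so the hazard ratio is not constant in time.
```

Here `hr[0]` is 0.5 while the tail ratio is about 0.8, which is exactly the non-proportionality pattern at issue.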
66.
In problems involving the evaluation of products or services (e.g. customer satisfaction analysis), the main difficulty lies in synthesising the information, which is necessary because there are several evaluators and many response variables (aspects under evaluation). In this article, the problem of determining and comparing the satisfaction of different groups of customers, in the presence of multivariate response variables and using the results of pairwise comparisons, is addressed. Within the framework of group ranking methods and multi-criteria decision-making theory, a new approach based on nonparametric techniques for evaluating group satisfaction in a multivariate framework is proposed, and the concept of Multivariate Relative Satisfaction is defined. An application to the evaluation of public transport services, namely the railway and urban bus services, by students of the University of Ferrara (Italy) is also discussed.
67.
The process of using data to infer the existence of stochastic dominance is subject to sampling error. Kroll and Levy (1980), among others, have presented simulation results for several normal and lognormal distributions showing high error probabilities over a wide range of parameter values. This paper continues that line of research and uses simulation to estimate error probabilities. The distributions considered are a pair of normals and a pair of lognormals. Analysis of these distributions is made computationally feasible through theoretical results that reduce the number of parameters of the pair of distributions from four to two.
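A minimal simulation sketch in the same spirit (illustrative normal parameters, not the paper's reduced two-parameter setup): estimate how often sampling error hides a true first-order stochastic dominance relation between two normals:

```python
import numpy as np

rng = np.random.default_rng(4)

def fsd(a, b):
    """First-order stochastic dominance of sample a over sample b: the
    empirical CDF of a lies at or below that of b at every pooled point."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side='right') / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side='right') / len(b)
    return np.all(Fa <= Fb)

# True distributions: N(0.5, 1) dominates N(0, 1). Count how often
# sampling error prevents the dominance from showing in the samples.
reps, n = 500, 50
errors = sum(not fsd(rng.normal(0.5, 1, n), rng.normal(0, 1, n))
             for _ in range(reps))
error_rate = errors / reps
```

Even with a genuine dominance relation, crossings of the empirical CDFs are common at this sample size, which is the error phenomenon the paper quantifies.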
68.
It is known that when there is a break in the variance (unconditional heteroskedasticity) of the error term in a linear regression model, routine application of the Lagrange multiplier (LM) test for autocorrelation can cause potentially significant size distortions. We propose a new test for autocorrelation that is robust in the presence of a break in variance. The proposed test is a modified LM test based on a generalized least squares regression. Monte Carlo simulations show that the new test performs well in finite samples: it is comparable to existing heteroskedasticity-robust tests in terms of size and much better in terms of power.
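For reference, here is a sketch of the standard (unmodified) LM test for autocorrelation, i.e. the Breusch–Godfrey auxiliary regression that the paper's robust test modifies; the simulated AR(1) design is an illustrative assumption:

```python
import numpy as np
from scipy import stats

def bg_lm_test(y, x, lags=1):
    """Breusch-Godfrey LM test: regress OLS residuals on the regressors
    plus their own lags; LM = n * R^2 of the auxiliary regression,
    asymptotically chi-square with `lags` degrees of freedom."""
    X = np.column_stack([np.ones(len(y)), x])
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    # Lagged-residual columns, padded with zeros where lags are unavailable
    lagged = np.column_stack([np.r_[np.zeros(k), e[:-k]]
                              for k in range(1, lags + 1)])
    Z = np.column_stack([X, lagged])
    e_aux = e - Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
    r2 = 1 - (e_aux @ e_aux) / (e @ e)
    lm = len(y) * r2
    return lm, stats.chi2.sf(lm, lags)

rng = np.random.default_rng(5)
x = rng.normal(size=400)
u = np.zeros(400)
for t in range(1, 400):                  # AR(1) errors with rho = 0.6
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + u
lm, pvalue = bg_lm_test(y, x)
```

With strongly autocorrelated errors the test rejects decisively; the paper's point is that this statistic loses its size control when the error variance breaks.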
69.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study of these confidence interval estimators is carried out through Monte Carlo simulations. Their performance is evaluated in terms of coverage probability and expected interval width. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for high coverage levels (e.g., nominal significance level ≤ 1%), whereas the score confidence interval estimator is more desirable at commonly used coverage levels (e.g., nominal significance level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
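A sketch of one of the intervals discussed, a score confidence interval under inverse sampling, obtained by numerically inverting the score test with expected information I(p) = r / (p^2 (1 - p)); the data values are illustrative assumptions, and the exact estimator studied in the paper may differ in detail:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def nb_score_ci(r, n, level=0.95):
    """Score confidence interval for p under inverse (negative binomial)
    sampling: r successes observed after n trials. Inverts the score test
    U(p)^2 / I(p) <= z^2 with U(p) = r/p - (n-r)/(1-p) and expected
    information I(p) = r / (p^2 (1-p))."""
    z2 = stats.norm.ppf(0.5 + level / 2) ** 2

    def g(p):  # score statistic minus the critical value
        score = r / p - (n - r) / (1 - p)
        return score ** 2 * p ** 2 * (1 - p) / r - z2

    mle = r / n                       # g(mle) = -z2 < 0, brackets both roots
    lo = brentq(g, 1e-10, mle)
    hi = brentq(g, mle, 1 - 1e-10) if g(1 - 1e-10) > 0 else 1.0
    return lo, hi

lo, hi = nb_score_ci(r=10, n=40)      # illustrative counts, not study data
```

The interval contains the maximum likelihood estimate r/n and stays inside (0, 1) by construction.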
70.
This article considers the issue of performing tests in linear heteroskedastic models when the test statistic employs a consistent variance estimator. Several estimators are considered, namely HC0, HC1, HC2, HC3, and their bias-adjusted versions. The numerical evaluation is performed using numerical integration methods, via the Imhof algorithm. The results show that bias adjustment of the variance estimators used to construct test statistics delivers more reliable tests for the HC0 and HC1 estimators, but the same does not hold for HC3. Overall, the most reliable test is the HC3-based one.
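The HC0–HC3 estimators mentioned above (without the bias adjustments the paper evaluates) can be sketched directly from their sandwich definitions; the simulated heteroskedastic design is an illustrative assumption:

```python
import numpy as np

def hc_std_errors(y, X):
    """Heteroskedasticity-consistent standard errors HC0-HC3 for OLS.
    Sandwich form (X'X)^{-1} X' diag(w) X (X'X)^{-1} with weights:
      HC0: e_i^2              HC1: e_i^2 * n/(n-k)
      HC2: e_i^2 / (1-h_i)    HC3: e_i^2 / (1-h_i)^2
    where h_i are the hat-matrix diagonals (leverages)."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    H = X @ XtX_inv @ X.T
    h = np.diag(H)
    e = y - H @ y                              # OLS residuals
    weights = {
        'HC0': e ** 2,
        'HC1': e ** 2 * n / (n - k),
        'HC2': e ** 2 / (1 - h),
        'HC3': e ** 2 / (1 - h) ** 2,
    }
    out = {}
    for name, w in weights.items():
        V = XtX_inv @ (X.T * w) @ X @ XtX_inv  # sandwich covariance
        out[name] = np.sqrt(np.diag(V))
    return out

rng = np.random.default_rng(6)
x = rng.normal(size=100)
X = np.column_stack([np.ones(100), x])
y = 1.0 + 2.0 * x + rng.normal(size=100) * (1 + np.abs(x))  # heteroskedastic
se = hc_std_errors(y, X)
```

Because the weights satisfy HC3 ≥ HC2 ≥ HC0 observation by observation, the resulting standard errors are ordered the same way, with HC3 the most conservative.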
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号