Article Search
By access: subscription full text 4,111; free 116; domestic free 21.
By subject: Management 136; Ethnology 5; Talent studies 1; Demography 16; Collected works 99; Theory and methodology 39; General 1,085; Sociology 45; Statistics 2,822.
By publication year: 2024: 1; 2023: 15; 2022: 19; 2021: 24; 2020: 65; 2019: 97; 2018: 138; 2017: 211; 2016: 121; 2015: 120; 2014: 141; 2013: 1,099; 2012: 377; 2011: 185; 2010: 167; 2009: 168; 2008: 175; 2007: 159; 2006: 124; 2005: 124; 2004: 118; 2003: 89; 2002: 59; 2001: 79; 2000: 74; 1999: 52; 1998: 33; 1997: 44; 1996: 25; 1995: 21; 1994: 15; 1993: 16; 1992: 11; 1991: 10; 1990: 11; 1989: 6; 1988: 9; 1987: 6; 1986: 3; 1985: 6; 1984: 7; 1983: 6; 1982: 7; 1980: 3; 1979: 1; 1978: 2; 1977: 2; 1976: 1; 1975: 2.
In total, 4,248 query results (search time: 546 ms).
61.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores then used to divide the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification with respect to two issues: (1) treatment may be equally effective for all patients, so that no subgroup difference exists; and (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design so as to demonstrate improved treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that type I error is generally controlled, while the overall adaptive power to detect treatment effects is reduced by approximately 4.5% for the simulation designs considered when the LRT is performed; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
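As a rough illustration of the cutoff question raised in issue (2), the sketch below contrasts a median split with a likelihood-based change-point search on a univariate composite score. It is not the authors' algorithm: censoring is ignored, a two-group normal working model stands in for the survival model, and all data and parameter choices are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: composite scores and a continuous response standing in
    # for (uncensored) log survival time; the true biomarker-positive subgroup
    # is the 30% of patients with the highest scores, so the subgroups are unbalanced.
    n = 300
    score = rng.normal(size=n)
    true_cut = np.quantile(score, 0.70)
    y = 1.0 + 0.8 * (score > true_cut) + rng.normal(scale=0.5, size=n)

    def split_loglik(y, mask):
        """Profile log-likelihood of a two-group normal model with a common variance."""
        m = len(y)
        rss = ((y[mask] - y[mask].mean()) ** 2).sum() + ((y[~mask] - y[~mask].mean()) ** 2).sum()
        return -0.5 * m * (np.log(2 * np.pi * rss / m) + 1)

    # Likelihood-based change-point search over candidate cutoffs (keep >= 10% per side)
    cands = np.quantile(score, np.linspace(0.10, 0.90, 81))
    ll = [split_loglik(y, score > c) for c in cands]
    best_cut = cands[int(np.argmax(ll))]
    median_cut = np.median(score)

    print(f"true cutoff         {true_cut: .3f}")
    print(f"change-point cutoff {best_cut: .3f}  (loglik {max(ll):.1f})")
    print(f"median cutoff       {median_cut: .3f}  (loglik {split_loglik(y, score > median_cut):.1f})")

In this toy setup the change-point cutoff tends to land near the 70th percentile, while the median split assigns half the patients to the positive subgroup by construction and therefore misclassifies a sizeable fraction, which is the behaviour the abstract describes.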
62.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either infeasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is a change in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual periods. We compute the type I error inflation as a function of the magnitude of the time trend to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve type I error in these contexts, as well as their performance on other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
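The inflation mechanism can be reproduced in a few lines: under the null, a simple RAR rule skews allocation toward whichever arm happens to look better early, and a linear patient drift then makes the later-favoured arm look genuinely better to a naive test. The sketch below is only a schematic illustration with a made-up RAR rule, drift magnitudes, and sample size; it is not one of the designs or correction methods studied in the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def one_trial(n=200, drift=0.2, burn_in=40):
        """Two-armed binary-outcome trial under H0 with a simple RAR rule and patient drift."""
        succ, fail = np.zeros(2), np.zeros(2)
        arms, outcomes = np.empty(n, dtype=int), np.empty(n)
        for t in range(n):
            if t < burn_in:
                p1 = 0.5                                   # equal randomisation to start
            else:                                          # skew towards the better-looking arm
                rate = (succ + 1) / (succ + fail + 2)
                p1 = rate[1] / rate.sum()
            arm = int(rng.random() < p1)
            base = 0.3 + drift * t / n                     # patient drift: prognosis improves over time
            out = rng.random() < base                      # identical in both arms (null hypothesis)
            succ[arm] += out
            fail[arm] += 1 - out
            arms[t], outcomes[t] = arm, out
        # naive two-proportion z-test that ignores both the adaptation and the drift
        x = np.array([outcomes[arms == a].sum() for a in (0, 1)])
        m = np.array([(arms == a).sum() for a in (0, 1)])
        p_pool = x.sum() / n
        z = (x[1] / m[1] - x[0] / m[0]) / np.sqrt(p_pool * (1 - p_pool) * (1 / m[0] + 1 / m[1]))
        return 2 * stats.norm.sf(abs(z))

    for drift in (0.0, 0.2, 0.4):
        pvals = np.array([one_trial(drift=drift) for _ in range(1000)])
        print(f"drift = {drift:.1f}: empirical type I error {np.mean(pvals < 0.05):.3f}")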
63.
A stable money demand function is essential when a monetary aggregate is used as the instrument of monetary policy. There is therefore a need to examine the stability of the money demand function in Nigeria after the deregulation of the financial sector. To this end, the study employs CUSUM (cumulative sum) and CUSUMSQ (CUSUM of squares) tests, after using the autoregressive distributed lag (ARDL) bounds test to establish the existence of a long-run relationship between monetary aggregates and their determinants. The results show that a long-run relationship holds and that the demand for money in Nigeria is stable. In addition, the inflation rate is found to be a better proxy for the opportunity cost variable than the interest rate. The main implication of the study is that the interest rate is ineffective as a monetary policy instrument in Nigeria.
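For readers unfamiliar with the stability tests, the sketch below implements Brown-Durbin-Evans recursive residuals and the resulting CUSUM and CUSUMSQ paths from scratch on synthetic data. It omits the ARDL bounds-testing step and uses made-up regressors in place of the Nigerian money demand data, so it only illustrates the mechanics of the stability check.

    import numpy as np

    def recursive_residuals(y, X):
        """Brown-Durbin-Evans recursive residuals for the OLS model y = X b + e."""
        n, k = X.shape
        w = []
        for t in range(k, n):
            XtX_inv = np.linalg.inv(X[:t].T @ X[:t])
            b = XtX_inv @ X[:t].T @ y[:t]
            f = 1.0 + X[t] @ XtX_inv @ X[t]
            w.append((y[t] - X[t] @ b) / np.sqrt(f))
        return np.array(w)

    def cusum_paths(y, X, a=0.948):                # a = 5% CUSUM boundary constant
        n, k = X.shape
        w = recursive_residuals(y, X)
        cusum = np.cumsum(w) / w.std(ddof=1)
        j = np.arange(1, n - k + 1)
        bound = a * np.sqrt(n - k) * (1 + 2 * j / (n - k))   # 5% significance lines
        cusumsq = np.cumsum(w ** 2) / np.sum(w ** 2)
        return cusum, bound, cusumsq, j / (n - k)            # last item: CUSUMSQ reference line

    # Hypothetical, stable money-demand-style regression (constant, income proxy, inflation)
    rng = np.random.default_rng(2)
    n = 120
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = X @ np.array([1.0, 0.8, -0.3]) + rng.normal(scale=0.2, size=n)

    cusum, bound, cusumsq, ref = cusum_paths(y, X)
    print("CUSUM stays inside the 5% band:", bool(np.all(np.abs(cusum) < bound)))
    print("max |CUSUMSQ - reference line|:", round(float(np.max(np.abs(cusumsq - ref))), 3))
    # (the CUSUMSQ band itself requires tabulated critical values, omitted here)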
64.
We consider the estimation of the conditional hazard function of a scalar response variable Y given a Hilbertian random variable X when the observations are linked via a single-index structure in the quasi-associated framework. We establish the pointwise almost complete convergence and the uniform almost complete convergence (with rates) of the estimator of this model. A simulation study is given to illustrate the good practical behavior of our methodology.
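A minimal numerical sketch of the object being estimated: discretised curves are projected onto a single index direction, and the conditional hazard is formed as a kernel conditional density divided by one minus a smoothed conditional distribution function. The index direction, bandwidths, and data-generating process below are all arbitrary choices for illustration; the paper's quasi-associated dependence structure is not modelled.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Hypothetical functional covariates: n curves observed on a grid of p points
    n, p = 400, 50
    grid = np.linspace(0, 1, p)
    X = rng.normal(size=(n, 1)) * np.sin(np.pi * grid) + 0.1 * rng.normal(size=(n, p))
    theta = np.sin(np.pi * grid)
    theta /= np.linalg.norm(theta)            # assumed (known) single-index direction
    U = X @ theta / p                         # discretised inner product <X_i, theta>
    Y = np.exp(0.5 * U + 0.3 * rng.normal(size=n))        # positive scalar responses

    def cond_hazard(y, u, U, Y, h_u=0.3, h_y=0.3):
        """Kernel estimate of h(y | u) = f(y | u) / (1 - F(y | u))."""
        w = norm.pdf((u - U) / h_u)                       # kernel weights in the index
        w /= w.sum()
        f = np.sum(w * norm.pdf((y - Y) / h_y)) / h_y     # conditional density estimate
        F = np.sum(w * norm.cdf((y - Y) / h_y))           # smoothed conditional cdf
        return f / max(1.0 - F, 1e-8)

    u0 = float(np.median(U))
    for y0 in (0.8, 1.0, 1.5, 2.0):
        print(f"h({y0:.1f} | median index) ~ {cond_hazard(y0, u0, U, Y):.3f}")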
65.
This paper presents some powerful omnibus tests for multivariate normality based on the likelihood ratio and on characterizations of the multivariate normal distribution. The power of the proposed tests is studied against various alternatives via Monte Carlo simulations. The simulation studies show that our tests compare well with other powerful tests, including multivariate versions of the Shapiro–Wilk test and the Anderson–Darling test.
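The Monte Carlo power study described above follows a standard pattern: fix a nominal level, generate data repeatedly under the null and under each alternative, and record rejection rates. The skeleton below reproduces that pattern with Mardia's multivariate skewness test as a stand-in (the paper's likelihood-ratio and characterization-based tests are not reproduced here) and with arbitrary sample size, dimension, and alternatives.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    def mardia_skewness_pvalue(X):
        """Mardia's multivariate skewness test (a stand-in, not the paper's tests)."""
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
        D = Xc @ S_inv @ Xc.T                    # Mahalanobis cross-product matrix
        stat = (D ** 3).sum() / (6.0 * n)        # n * b_{1,p} / 6
        df = p * (p + 1) * (p + 2) // 6
        return stats.chi2.sf(stat, df)

    def rejection_rate(sampler, n=50, p=3, reps=1000, alpha=0.05):
        """Monte Carlo size/power: rejection rate under a given data-generating sampler."""
        return np.mean([mardia_skewness_pvalue(sampler(n, p)) < alpha for _ in range(reps)])

    normal  = lambda n, p: rng.normal(size=(n, p))                  # null: multivariate normal
    lognorm = lambda n, p: np.exp(0.5 * rng.normal(size=(n, p)))    # skewed alternative
    heavy   = lambda n, p: rng.standard_t(df=5, size=(n, p))        # heavy-tailed alternative

    print("size under normality:  ", rejection_rate(normal))
    print("power vs lognormal:    ", rejection_rate(lognorm))
    print("power vs t5 margins:   ", rejection_rate(heavy))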
66.
Tests for equality of variances using independent samples are widely used in data analysis. Conover et al. [A comparative study of tests for homogeneity of variance, with applications to the outer continental shelf bidding data. Technometrics. 1981;23:351–361] won the Youden Prize by comparing 56 variations of popular tests for variance on the basis of robustness and power in 60 different scenarios. None of the tests they compared were both robust and powerful for the skewed distributions they considered. This study looks at 12 variations they did not consider and shows that 10 are robust for those skewed distributions as well as for the lognormal distribution, which they did not study. Three of these 12 have clearly superior power for skewed distributions and are competitive in terms of robustness and power for all of the distributions considered. They are recommended for general use on the basis of robustness, power, and ease of application.
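The 12 new variations are not spelled out in the abstract, so the sketch below only reproduces the style of the comparison with off-the-shelf tests from SciPy: mean-, median-, and trimmed-mean-centred Levene-type tests plus Bartlett's test, checked for size under a skewed null (equal-scale lognormals) and for power under a scale shift. Sample sizes, distributions, and replication counts are arbitrary.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def rejection_rate(sampler, test, reps=2000, alpha=0.05):
        """Monte Carlo rejection rate of a two-sample equal-variance test."""
        return np.mean([test(*sampler())[1] < alpha for _ in range(reps)])

    # Skewed null: two lognormal samples with equal scale (robustness / size check)
    null_skewed = lambda: (rng.lognormal(size=30), rng.lognormal(size=30))
    # Skewed alternative: second sample has doubled scale (power check)
    alt_skewed = lambda: (rng.lognormal(size=30), 2.0 * rng.lognormal(size=30))

    tests = {
        "Levene (mean)":           lambda a, b: stats.levene(a, b, center="mean"),
        "Brown-Forsythe (median)": lambda a, b: stats.levene(a, b, center="median"),
        "Levene (trimmed mean)":   lambda a, b: stats.levene(a, b, center="trimmed",
                                                             proportiontocut=0.1),
        "Bartlett":                lambda a, b: stats.bartlett(a, b),
    }
    for name, t in tests.items():
        print(f"{name:24s} size = {rejection_rate(null_skewed, t):.3f}   "
              f"power = {rejection_rate(alt_skewed, t):.3f}")

Under the skewed null, the mean-centred Levene test and Bartlett's test typically over-reject while the median-centred version stays near the nominal level, which is the robustness/power trade-off the study quantifies for its 12 additional variations.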
67.
In late-phase confirmatory clinical trials in oncology, time-to-event (TTE) endpoints are commonly used as primary endpoints for establishing the efficacy of investigational therapies. Among these TTE endpoints, overall survival (OS) is widely considered the gold standard. However, OS data can take years to mature, and its use as a measure of efficacy can be confounded by post-treatment rescue therapies or supportive care. Therefore, to accelerate the development process and better characterize the treatment effect of new investigational therapies, other TTE endpoints such as progression-free survival and event-free survival (EFS) are used as primary efficacy endpoints in some confirmatory trials, either as surrogates for OS or as direct measures of clinical benefit. For evaluating novel treatments for acute myeloid leukemia, EFS has gradually been recognized as a direct measure of clinical benefit. However, the application of an EFS endpoint remains controversial, mainly because of the debate surrounding the definition of treatment failure (TF) events. In this article, we investigate the EFS endpoint under the most conservative definition of the timing of TF, namely Day 1 after randomization. Specifically, the corresponding non-proportional hazards pattern of the EFS endpoint is investigated with both analytical and numerical approaches.
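The non-proportionality is easy to see numerically: if treatment failures are counted as events on Day 1, the EFS hazard has a large spike at Day 1 whose size is driven by the TF rate, while the later hazard reflects the duration of response, so the hazard ratio changes over time. The sketch below uses made-up TF rates and response-duration medians, ignores censoring, and is not the paper's analytical treatment.

    import numpy as np

    rng = np.random.default_rng(6)

    def simulate_efs(n, tf_rate, median_resp_days):
        """EFS times when treatment-failure (TF) events are assigned to Day 1."""
        tf = rng.random(n) < tf_rate
        responder_time = rng.exponential(median_resp_days / np.log(2), size=n)
        return np.where(tf, 1.0, responder_time)

    n = 5000                                   # large n so the empirical hazards are stable
    ctrl = simulate_efs(n, tf_rate=0.40, median_resp_days=300)
    trt = simulate_efs(n, tf_rate=0.20, median_resp_days=360)

    def interval_hazard(t, a, b):
        """Crude hazard over (a, b]: events divided by person-time at risk in the interval."""
        at_risk = t > a
        events = (at_risk & (t <= b)).sum()
        person_time = (np.clip(t[at_risk], a, b) - a).sum()
        return events / person_time

    for a, b in [(0, 2), (2, 180), (180, 720)]:
        hr = interval_hazard(trt, a, b) / interval_hazard(ctrl, a, b)
        print(f"days ({a:>3}, {b:>3}]: empirical hazard ratio ~ {hr:.2f}")

The early interval is dominated by the Day-1 TF mass and the later intervals by the response-duration difference, so no single proportional-hazards summary describes the whole curve.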
68.
In problems related to the evaluation of products or services (e.g. in customer satisfaction analysis), the main difficulty concerns the synthesis of the information, which is made necessary by the presence of several evaluators and many response variables (the aspects under evaluation). In this article, the problem of determining and comparing the satisfaction of different groups of customers, in the presence of multivariate response variables and using the results of pairwise comparisons, is addressed. Within the framework of group ranking methods and multi-criteria decision-making theory, a new approach based on nonparametric techniques for evaluating group satisfaction in a multivariate framework is proposed, and the concept of Multivariate Relative Satisfaction is defined. An application to the evaluation of public transport services, namely the railway service and the urban bus service, by students of the University of Ferrara (Italy) is also discussed.
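The exact definition of Multivariate Relative Satisfaction is not given in the abstract, so the sketch below only illustrates the general ingredients: pairwise comparisons of groups, one nonparametric dominance index per response variable, and an aggregation across variables. The groups, rating scales, and the simple averaging step are all hypothetical stand-ins, not the authors' construction.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical evaluation data: three groups of students rating four service aspects (1-10)
    groups = {
        "rail users": np.clip(rng.normal([7, 6, 6, 7], 1.5, size=(80, 4)), 1, 10),
        "bus users":  np.clip(rng.normal([6, 6, 5, 6], 1.5, size=(90, 4)), 1, 10),
        "both":       np.clip(rng.normal([6, 7, 6, 6], 1.5, size=(60, 4)), 1, 10),
    }

    def dominance_index(a, b):
        """Mann-Whitney-type index: estimate of P(A > B) + 0.5 P(A = B)."""
        diff = a[:, None] - b[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    names = list(groups)
    for i, g in enumerate(names):
        for h in names[i + 1:]:
            # one dominance index per evaluated aspect, then a simple aggregation
            idx = [dominance_index(groups[g][:, k], groups[h][:, k]) for k in range(4)]
            print(f"{g:>10} vs {h:<10} per-aspect {np.round(idx, 2)}  aggregate {np.mean(idx):.2f}")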
69.
The process of using data to infer the existence of stochastic dominance is subject to sampling error. Kroll and Levy (1980), among others, have presented simulation results for several normal and lognormal distributions which show high error probabilities over a wide range of parameter values. This paper continues this line of research and uses simulation to estimate error probabilities. The distributions considered are a pair of normals and a pair of lognormals. Analysis of these distributions is made computationally feasible through theoretical results which reduce the number of parameters of the pair of distributions from four to two.
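A small simulation of the kind described can be written directly: sample from two normal populations, declare first-order dominance whenever one empirical CDF lies on or below the other over a grid, and repeat. Standardising one population shows why only two parameters (the mean difference and the standard-deviation ratio) matter, which is the four-to-two reduction mentioned above. Sample sizes, grids, and parameter values below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(8)

    def empirical_fsd(a, b, grid):
        """True if the empirical CDF of a lies on or below that of b everywhere on the grid."""
        Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
        Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
        return bool(np.all(Fa <= Fb))

    def dominance_rate(delta, ratio, n=50, reps=2000):
        """P(sample suggests X dominates Y) with X ~ N(delta, ratio^2) and Y ~ N(0, 1).
        Standardising Y reduces the four normal parameters to (delta, ratio)."""
        grid = np.linspace(-4.0, 4.0 + delta, 200)
        hits = sum(empirical_fsd(rng.normal(delta, ratio, n), rng.normal(0.0, 1.0, n), grid)
                   for _ in range(reps))
        return hits / reps

    # ratio > 1: the population CDFs cross, so any dominance conclusion is an error;
    # ratio = 1 with delta > 0: dominance truly holds, so 1 minus the rate is the error.
    for delta, ratio in [(0.5, 1.5), (1.0, 1.5), (0.5, 1.0)]:
        print(f"delta = {delta}, sigma ratio = {ratio}: "
              f"P(conclude X dominates Y) = {dominance_rate(delta, ratio):.3f}")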
70.
It is well known that when there is a break in the variance (unconditional heteroskedasticity) of the error term in linear regression models, a routine application of the Lagrange multiplier (LM) test for autocorrelation can suffer potentially significant size distortions. We propose a new test for autocorrelation that is robust in the presence of a break in variance. The proposed test is a modified LM test based on a generalized least squares regression. Monte Carlo simulations show that the new test performs well in finite samples: it is comparable to other existing heteroskedasticity-robust tests in terms of size and much better in terms of power.
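A simplified version of the idea can be sketched as follows: a Breusch-Godfrey-style LM statistic is computed once on the raw data and once after reweighting each observation by an estimated segment standard deviation (a crude GLS step), with the break date treated as known. This is only an illustration of the mechanism, with an assumed break location, variance ratio, and sample size; it is not the authors' exact test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)

    def lm_autocorr_pvalue(y, X, nlags=1, weights=None):
        """Breusch-Godfrey-style LM test for autocorrelation; optional GLS-type weights (1/sigma)."""
        if weights is not None:
            y, X = y * weights, X * weights[:, None]
        e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        lagged = np.column_stack([np.r_[np.zeros(l), e[:-l]] for l in range(1, nlags + 1)])
        Z = np.column_stack([X, lagged])                     # auxiliary regressors
        e_aux = e - Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
        r2 = 1.0 - (e_aux @ e_aux) / (e @ e)
        return stats.chi2.sf(len(y) * r2, nlags)             # n * R^2 ~ chi2(nlags) under H0

    def one_rep(n=200, break_at=100, var_ratio=4.0):
        """Serially uncorrelated errors with a variance break halfway through the sample."""
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        sigma = np.where(np.arange(n) < break_at, 1.0, np.sqrt(var_ratio))
        y = X @ np.array([1.0, 0.5]) + sigma * rng.normal(size=n)
        e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        s1, s2 = e[:break_at].std(ddof=1), e[break_at:].std(ddof=1)
        w = 1.0 / np.where(np.arange(n) < break_at, s1, s2)  # weights from segment variances
        return lm_autocorr_pvalue(y, X), lm_autocorr_pvalue(y, X, weights=w)

    pvals = np.array([one_rep() for _ in range(2000)])
    print(f"empirical size, naive LM test:        {np.mean(pvals[:, 0] < 0.05):.3f}")
    print(f"empirical size, GLS-weighted LM test: {np.mean(pvals[:, 1] < 0.05):.3f}")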