171.
Bayesian methods are increasingly used in proof‐of‐concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior‐data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior‐data conflict by comparing the observed data to the prior predictive distribution, resorting to a non‐informative prior if prior‐data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features compared with the standard one‐component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked that, intuitively, one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
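The mixture-prior update described above has a closed form in the conjugate normal case with known sampling variance. The following is a minimal sketch (not the authors' implementation; the function and argument names are illustrative): each component is updated conjugately, and the component weights are reweighted by the marginal likelihood of the sample mean under that component, so a prior-data conflict shifts mass toward the diffuse component.

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_posterior(ybar, n, sigma2, components):
    """components: list of (weight, prior_mean, prior_var) tuples.
    Returns one (posterior_weight, posterior_mean, posterior_var)
    per component, for a normal mean with known sampling variance sigma2."""
    updated, marginals = [], []
    for w, m0, v0 in components:
        # conjugate normal update within each component
        post_var = 1.0 / (1.0 / v0 + n / sigma2)
        post_mean = post_var * (m0 / v0 + n * ybar / sigma2)
        updated.append((post_mean, post_var))
        # marginal likelihood of the sample mean under this component
        marginals.append(w * normal_pdf(ybar, m0, v0 + sigma2 / n))
    total = sum(marginals)
    return [(mw / total, pm, pv) for mw, (pm, pv) in zip(marginals, updated)]
```

With an informative N(0, 1) component and a diffuse N(0, 100) component, a sample mean of 5 (clear conflict) pushes nearly all posterior weight onto the diffuse component, widening the resulting credible interval, while a sample mean near 0 retains the informative component.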
172.
The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis, describing the expected remaining lifetime of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us identify factors, such as age and gender, that influence the remaining lifetimes of patients after a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right-censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.
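For intuition about the quantity being monitored, here is a minimal sketch of the empirical MRL at age t for fully observed (uncensored) lifetimes; the paper itself handles right censoring via empirical likelihood, which this toy estimator does not attempt.

```python
def empirical_mrl(times, t):
    """Empirical mean residual life at age t: E[X - t | X > t],
    estimated from fully observed (uncensored) lifetimes."""
    survivors = [x - t for x in times if x > t]
    if not survivors:
        raise ValueError("no observations survive beyond t")
    return sum(survivors) / len(survivors)
```

A change-point procedure would compare estimates of this function before and after a candidate age; for example, `empirical_mrl([1, 2, 3, 4, 5], 2)` averages the remaining lifetimes 1, 2, 3 of the survivors and returns 2.0.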
173.
This paper selects feature keywords through an improved TF-IDF algorithm combined with dynamic construction of multi-word phrases, and applies the CluStream data-stream clustering method to achieve dynamic discovery of text topics. Experiments show that the method can effectively discover evolving topics in massive text streams, supporting related-topic recommendation and dynamic public-opinion monitoring.
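As a point of reference for the keyword-selection step, here is a minimal sketch of standard (unimproved) TF-IDF weighting over tokenized documents; the paper's specific improvement and its phrase-construction step are not reproduced here.

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one dict per document mapping
    term -> tf-idf weight, with tf = count / doc length and
    idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per document
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({term: (count / len(doc)) * math.log(n / df[term])
                       for term, count in tf.items()})
    return scores
```

Terms appearing in every document get weight zero, while terms concentrated in few documents score highly, which is what makes the weights usable as feature keywords for downstream clustering.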
174.
This paper briefly introduces the structural features and working mechanism of disc bits, and focuses on analyzing data from comparative scraping experiments on Chongqing sandstone between disc cutting teeth of nine different structural parameters for disc roller-cone bits and the spoon-shaped and wedge-shaped teeth currently used on insert roller-cone bits. From the measured data, the maximum, minimum, and mean scraping forces during cutting, as well as the scraping force required per unit volume of rock removed, were computed, giving the scraping performance of the differently structured teeth on Chongqing sandstone. The results provide experimental data for the further design of disc bits with high overall performance suited to medium-hard and soft formations.
175.
Subgroup detection has received increasing attention recently in fields such as clinical trials, public management and market segmentation analysis. In these fields, one often faces time‐to‐event data, which are commonly subject to right censoring. This paper proposes a semiparametric Logistic‐Cox mixture model for subgroup analysis when the outcome of interest is an event time with right censoring. The proposed method centers on a likelihood ratio‐based testing procedure for the existence of subgroups. The expectation–maximization iteration is applied to improve the testing power, and a model‐based bootstrap approach is developed to implement the testing procedure. When subgroups exist, the proposed model can also be used to estimate the subgroup effect and construct predictive scores for subgroup membership. The large sample properties of the proposed method are studied, and its finite sample performance is assessed by simulation studies. A real data example is also provided for illustration.
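The model-based bootstrap used to calibrate the test follows a generic recipe: fit the null model, simulate datasets from it, and compare the observed statistic with its simulated distribution. Here is a minimal, generic sketch (a toy illustration, not the Logistic‐Cox procedure; the statistic and null simulator are placeholders supplied by the caller).

```python
import random

def bootstrap_pvalue(stat, data, simulate_null, b=500, seed=0):
    """Model-based bootstrap p-value: fraction of B null-simulated
    datasets whose statistic meets or exceeds the observed one
    (with the usual +1 correction to avoid a zero p-value)."""
    rng = random.Random(seed)
    observed = stat(data)
    hits = sum(stat(simulate_null(rng)) >= observed for _ in range(b))
    return (hits + 1) / (b + 1)

# Toy usage: null model N(0, 1), statistic |sample mean|.
p = bootstrap_pvalue(lambda d: abs(sum(d) / len(d)),
                     [5.0] * 50,
                     lambda r: [r.gauss(0.0, 1.0) for _ in range(50)])
```

In the paper's setting, `simulate_null` would draw censored survival data from the fitted one-component Cox model and `stat` would be the likelihood ratio statistic after EM fitting of the mixture.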
176.
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non‐monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non‐monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non‐MAR and non‐normal) assumptions not much stronger than those of LI. Moreover, when missingness is non‐monotone, they are typically more efficient.
177.
We investigate empirical likelihood for the additive hazards model with current status data. An empirical log-likelihood ratio for a vector or subvector of regression parameters is defined, and its limiting distribution is shown to be a standard chi-squared distribution. The proposed procedure enables empirical likelihood-based inference for the regression parameters. Finite sample performance is assessed in simulation studies against a normal approximation method, which show that the empirical likelihood method provides more accurate inference than the normal approximation method. A real data example is used for illustration.
178.
This article investigates the choice of working covariance structures in the analysis of spatially correlated observations, motivated by cardiac imaging data. Through Monte Carlo simulations, we found that the choice of covariance structure affects the efficiency of the estimator and the power of the test. Choosing the popular unstructured working covariance results in an over-inflated Type I error, possibly because the sample size is not large enough relative to the number of parameters being estimated. With regard to model fit indices, the Bayesian Information Criterion outperforms the Akaike Information Criterion in choosing the covariance structure actually used to generate the data.
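The two fit indices compared above differ only in their complexity penalty, which explains why BIC more readily rejects the parameter-heavy unstructured covariance. A minimal sketch of the standard formulas (log-likelihood, parameter count, and sample size are the caller's):

```python
import math

def aic(loglik, k):
    """Akaike Information Criterion: 2k - 2*loglik (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*loglik."""
    return k * math.log(n) - 2 * loglik
```

Since the BIC penalty per parameter is ln(n) rather than 2, BIC penalizes extra covariance parameters more heavily than AIC whenever n exceeds e² ≈ 7.4, which is essentially always in imaging applications.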
179.
A practical problem with large-scale survey data is the possible presence of overdispersion. It occurs when the data display more variability than is predicted by the variance–mean relationship. This article describes a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion. Specifically, several tests for detecting overdispersion will be implemented on the basis of the asymptotic theory for maximum likelihood estimators. We discuss the results of a simulation experiment concerning log-likelihood ratio, Wald, Score, and Profile tests. Finally, some real datasets are analyzed to illustrate the previous results.
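As a simple illustration of what an overdispersion test measures (a generic Poisson example, not the mixture-distribution tests developed in the article), the classical dispersion statistic compares the sample variance with the mean it should equal under the null:

```python
def dispersion_statistic(counts):
    """Classical Poisson dispersion test statistic:
    (n - 1) * s^2 / xbar, approximately chi-square with n - 1 degrees
    of freedom under the equidispersed (Poisson) null."""
    n = len(counts)
    xbar = sum(counts) / n
    s2 = sum((x - xbar) ** 2 for x in counts) / (n - 1)
    return (n - 1) * s2 / xbar
```

Overdispersed counts such as `[0, 0, 0, 10, 10, 10]` (mean 5, sample variance 30) give a statistic of 30, far beyond the chi-square(5) 95% critical value of about 11.07, while near-Poisson counts stay small.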
180.
Computer models with functional output are omnipresent throughout science and engineering. Most often the computer model is treated as a black-box and information about the underlying mathematical model is not exploited in statistical analyses. Consequently, general-purpose bases such as wavelets are typically used to describe the main characteristics of the functional output. In this article we advocate for using information about the underlying mathematical model in order to choose a better basis for the functional output. To validate this choice, a simulation study is presented in the context of uncertainty analysis for a computer model from inverse Sturm-Liouville problems.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号