131.
In this paper, a competing risks model is considered under an adaptive type-I progressive hybrid censoring scheme (AT-I PHCS). The latent failure times follow Weibull distributions with a common shape parameter. We investigate maximum likelihood estimation of the parameters. Bayes estimates of the parameters are obtained under squared error and LINEX loss functions, assuming independent gamma priors. We apply Markov chain Monte Carlo (MCMC) techniques to carry out the Bayesian estimation procedure and, in turn, to compute credible intervals. To evaluate the performance of the estimators, a simulation study is carried out.
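As an illustration of the Bayesian machinery described above, the sketch below assumes a deliberately simplified setting: a single Weibull cause of failure with a known shape parameter and complete (uncensored) data. In that case the independent gamma prior on the Weibull rate is conjugate, so posterior draws can stand in for an MCMC chain; the full competing-risks model under AT-I PHCS would require genuine MCMC. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Weibull lifetimes; shape k is treated as known in this sketch.
# Parameterisation: f(t) = lam * k * t^(k-1) * exp(-lam * t^k)
k = 1.5
lam_true = 0.8
t = rng.weibull(k, size=50) * lam_true ** (-1 / k)

# Independent gamma prior lam ~ Gamma(a, b); conjugate when k is known:
a, b = 1.0, 1.0
post_a, post_b = a + len(t), b + np.sum(t ** k)

# Posterior draws (here exact; in the general model these come from MCMC)
draws = rng.gamma(post_a, 1 / post_b, size=20_000)

bayes_se = draws.mean()                                  # squared-error loss
c = 1.0                                                  # LINEX asymmetry
bayes_linex = -np.log(np.mean(np.exp(-c * draws))) / c   # LINEX loss
ci = np.percentile(draws, [2.5, 97.5])                   # 95% credible interval
print(bayes_se, bayes_linex, ci)
```

With `c > 0` the LINEX estimate lies below the posterior mean, reflecting the heavier penalty on overestimation.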
132.
Applying large and moderate deviation results for the log-likelihood ratio of the Rayleigh diffusion model, we give the rejection regions for testing the Rayleigh diffusion model and obtain the decay rates of the error probabilities.
133.
The logarithmic general error distribution (log-GED), an extension of the log-normal distribution, is proposed. Some interesting properties of the log-GED are derived. These properties are applied to establish the asymptotic behavior of the ratio of the probability densities and of the ratio of the tails of the logarithmic general error and log-normal distributions, and to derive the asymptotic distribution of the partial maximum of an independent and identically distributed sequence following the log-GED.
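The log-GED itself is not in standard libraries, but since Y = exp(X) with X following a generalized error distribution defines it, the tail comparison with the log-normal can be sketched via `scipy.stats.gennorm` (whose `beta = 2` case is a normal with standard deviation 1/sqrt(2), so that choice recovers a log-normal). The value `beta = 1.2` below is an illustrative heavy-tailed choice, not one taken from the paper.

```python
import numpy as np
from scipy.stats import gennorm, lognorm

# Y = exp(X), X ~ GED(beta): P(Y > y) = P(X > log y).
beta = 1.2                      # beta < 2 -> heavier tails than log-normal
y = np.array([2.0, 5.0, 20.0, 100.0])

log_ged_tail = gennorm.sf(np.log(y), beta)       # log-GED survival function
lognorm_tail = lognorm.sf(y, s=1 / np.sqrt(2))   # matching log-normal tail

ratio = log_ged_tail / lognorm_tail
print(ratio)   # grows with y: the log-GED tail eventually dominates
```

The diverging ratio is exactly the kind of tail-ratio asymptotics the abstract refers to.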
134.
Factor models, structural equation models (SEMs) and random-effect models share the common feature that they assume latent or unobserved random variables. Factor models and SEMs offer well-developed procedures for a rich class of covariance models with many parameters, while random-effect models offer well-developed procedures for non-normal models, including heavy-tailed distributions for responses and random effects. In this paper, we show how these two developments can be combined to yield an extremely rich class of models, which can benefit both areas. A new fitting procedure for binary factor models and a robust estimation approach for continuous factor models are proposed.
135.
The additive Cox model is flexible and powerful for modelling dynamic changes of regression coefficients in survival analysis. This paper is concerned with feature screening for the additive Cox model with ultrahigh-dimensional covariates. The proposed screening procedure can effectively identify active predictors: with probability tending to one, the selected variable set includes the actual active predictors. To carry out the procedure, we propose an effective algorithm and establish its ascent property. We further prove that the proposed procedure possesses the sure screening property. Finally, we examine the finite-sample performance of the proposed procedure via Monte Carlo simulations and illustrate it with a real data example.
136.
In this paper, we propose a multiple deferred state repetitive group sampling plan, a new sampling plan developed by incorporating the features of both the multiple deferred state sampling plan and the repetitive group sampling plan, for assuring Weibull or gamma distributed mean life of products. The quality of the product is represented by the ratio of the true mean life to the specified mean life. The two-points-on-the-operating-characteristic-curve approach is used to determine the optimal parameters of the proposed plan. The plan parameters are determined by formulating an optimization problem for various combinations of producer's risk and consumer's risk under both distributions. A sensitivity analysis of the proposed plan is discussed. The implementation of the proposed plan is explained using real-life and simulated data. The proposed plan under the Weibull distribution is compared with existing sampling plans. The average sample number (ASN) of the proposed plan and the failure probability of the product are obtained under the Weibull, gamma and Birnbaum–Saunders distributions for a specified value of the shape parameter and compared with each other. In addition, a comparative study is made between the ASN of the proposed plan under the Weibull and gamma distributions.
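Under a Weibull lifetime model, the failure probability by the test termination time is a simple function of the mean-life ratio, which is what drives the operating-characteristic calculations mentioned above. A minimal sketch, with illustrative shape and termination-ratio values (not those of the paper):

```python
from math import gamma, exp

def weibull_failure_prob(mean_ratio, shape=2.0, term_ratio=1.0):
    """P(item fails before t0 = term_ratio * mu0) when the true mean life
    is mean_ratio * mu0 and lifetimes are Weibull(shape).
    Uses: Weibull mean = scale * Gamma(1 + 1/shape)."""
    scale_over_mu0 = mean_ratio / gamma(1 + 1 / shape)
    return 1 - exp(-(term_ratio / scale_over_mu0) ** shape)

# Better products (larger mean-life ratio) fail less often by test time:
for r in (1.0, 2.0, 4.0):
    print(r, weibull_failure_prob(r))
```

Tabulating this probability against the producer's and consumer's risk points is the core of the two-points design procedure.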
137.
Estimation in the multivariate context when the number of observations available is less than the number of variables is a classical theoretical problem. In order to ensure estimability, one has to assume certain constraints on the parameters. A method for maximum likelihood estimation under constraints is proposed to solve this problem. Even in the extreme case where only a single multivariate observation is available, this may provide a feasible solution. It simultaneously provides a simple, straightforward methodology to allow for specific structures within and between covariance matrices of several populations. This methodology yields exact maximum likelihood estimates.
138.
Measures of statistical divergence are used to assess mutual similarities between the distributions of multiple variables through a variety of methodologies, including Shannon entropy and Csiszár divergence. Modified measures of statistical divergence are introduced in the present article. These modified measures are related to the Lin–Wong (LW) divergence applied to past lifetime data. Accordingly, the relationship between Fisher information and the LW divergence measure is explored for past lifetime data. A number of relations are proposed between various assessment methods that implement the Jensen–Shannon, Jeffreys, and Hellinger divergence measures. Relations between the LW measure and the Kullback–Leibler (KL) measure for past lifetime data are also examined. Furthermore, the present study discusses the relationship between the proposed ordering scheme and the distance interval between the LW and KL measures under certain conditions.
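The divergence measures named above have short closed forms for discrete distributions; a minimal sketch (of the standard measures, not the modified LW measures of the article, which concern past lifetime data):

```python
import numpy as np

def kl(p, q):
    """Kullback–Leibler divergence KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # 0 * log 0 is taken as 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jensen_shannon(p, q):
    """Symmetrised, bounded (by log 2) smoothing of KL via m = (p + q)/2."""
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def hellinger(p, q):
    """Hellinger distance: ||sqrt(p) - sqrt(q)||_2 / sqrt(2), in [0, 1]."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(kl(p, q), jensen_shannon(p, q), hellinger(p, q))
```

Jensen–Shannon and Hellinger are symmetric and bounded, which is why they are often preferred to raw KL when comparing assessment methods.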
139.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification in two respects: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while the overall adaptive power to detect treatment effects sacrifices approximately 4.5% for the simulation designs considered by performing the LRT; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
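A likelihood-based change-point cutoff of the kind mentioned above can be sketched by profiling a two-group normal likelihood over every possible split of the sorted scores. The simulated 30/70 mixture below is a hypothetical example of subgroups whose sizes differ substantially, the situation in which a median cutoff is misplaced:

```python
import numpy as np

def changepoint_cutoff(scores):
    """Cut the sorted scores at the split maximising the two-group normal
    profile log-likelihood (common variance), rather than at the median."""
    s = np.sort(np.asarray(scores, float))
    n = len(s)
    best_ll, best_cut = -np.inf, None
    for k in range(2, n - 1):                 # at least 2 points per group
        left, right = s[:k], s[k:]
        # pooled residual sum of squares around the two group means
        rss = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        ll = -n / 2 * np.log(rss / n)         # profile log-likelihood (up to const)
        if ll > best_ll:
            best_ll, best_cut = ll, (s[k - 1] + s[k]) / 2
    return best_cut

rng = np.random.default_rng(2)
# 30 biomarker-negative vs 70 biomarker-positive scores (hypothetical):
scores = np.concatenate([rng.normal(0, 1, 30), rng.normal(3, 1, 70)])
print(changepoint_cutoff(scores), np.median(scores))
```

With a 30/70 split, the median lands inside the larger subgroup, while the likelihood-based cut tracks the boundary between the two score clusters.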
140.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better-performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either infeasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is changes in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual. We compute the type I error inflation as a function of the magnitude of the time trend to determine the contexts in which the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts, and their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
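One concrete example of an RAR rule that skews allocation towards better-performing arms is a Thompson-sampling-style scheme with beta posteriors. This is an illustration of the general idea, not the specific design proposed in the paper, and the response rates are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = [0.3, 0.6]                      # hypothetical response rates, arms 0 and 1
succ = np.zeros(2)
fail = np.zeros(2)                       # Beta(1, 1) priors on each arm

alloc = []
for _ in range(200):
    # Allocation skewed toward the arm more likely to be best, given the data so far
    draws = rng.beta(1 + succ, 1 + fail)
    arm = int(np.argmax(draws))
    alloc.append(arm)
    outcome = rng.random() < p_true[arm] # observe a binary response
    succ[arm] += outcome
    fail[arm] += 1 - outcome

print(np.mean(alloc))                    # fraction of patients sent to the better arm
```

Because allocation depends on accumulating outcomes, any drift in patient characteristics over time feeds into the arm comparisons, which is precisely the source of the type I error inflation the paper analyses.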