51.
Reliability sampling plans provide an efficient method to determine the acceptability of a product based upon the lifelengths of some test units. Usually, they depend on the producer's and consumer's quality requirements and do not admit closed-form solutions. Acceptance sampling plans for one- and two-parameter exponential lifetime models, derived by approximating the operating characteristic curve, are presented in this paper. The accuracy of these approximate plans, which are explicitly expressible and valid for failure and progressive censoring, is assessed. The approximation proposed in the one-parameter case is found to be practically exact. Explicit lower and upper bounds on the smallest sample size are given in the two-parameter case. Some additional advantages are also pointed out.
52.
We propose a hidden Markov model for longitudinal count data where sources of unobserved heterogeneity arise, making data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects in the link function. Due to the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.
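The data-generating mechanism described in this abstract can be illustrated with a small simulation: a two-state hidden Markov chain drives Poisson counts, and a subject-level random intercept in the log link adds the extra-Poisson overdispersion. This is a minimal sketch under assumed parameter values (the transition matrix, state means, and random-effect standard deviation are illustrative, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_hmm_counts(n_subjects=50, n_times=10, re_sd=0.5):
    """Simulate longitudinal counts from a 2-state hidden Markov chain with
    Poisson emissions and a subject-level random intercept on the log scale
    (the source of the overdispersion the model is designed to capture)."""
    P = np.array([[0.9, 0.1],       # hidden-state transition probabilities
                  [0.2, 0.8]])
    state_means = np.array([1.0, 5.0])
    y = np.empty((n_subjects, n_times), dtype=int)
    for i in range(n_subjects):
        b_i = rng.normal(0.0, re_sd)          # individual-specific random effect
        s = rng.choice(2)                     # initial hidden state
        for t in range(n_times):
            rate = state_means[s] * np.exp(b_i)  # log link: log(mu) = log(m_s) + b_i
            y[i, t] = rng.poisson(rate)
            s = rng.choice(2, p=P[s])         # Markov transition
    return y

counts = simulate_hmm_counts()
# Marginally, the variance exceeds the mean: overdispersion relative to Poisson.
print(counts.mean(), counts.var())
```

Fitting the model by EM with NPML mass points, as the abstract proposes, would replace the normal random effect with a discrete mixing distribution estimated from the data.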
53.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
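The sample size review motivating this design rests on the standard two-arm normal formula, with the doubtful standard deviation replaced at the interim by its estimate. A minimal sketch (the planning and interim values of sigma and the target difference delta are hypothetical, not figures from the diabetic neuropathic pain trial):

```python
import math
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, power=0.9):
    """Per-arm sample size for comparing two normal means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Planning assumed sigma = 2.0; suppose the interim estimate is 2.6,
# so the total sample size is revised upward at the end of stage one.
print(n_per_arm(2.0, 1.0), n_per_arm(2.6, 1.0))
```

A full two-stage procedure would additionally adjust the final test for the interim look; this sketch only shows the sample size recalculation step.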
54.
A fully parametric first-order autoregressive (AR(1)) model is proposed to analyse binary longitudinal data. By using a discretized version of a copula, the modelling approach allows one to construct separate models for the marginal response and for the dependence between adjacent responses. In particular, the transition model considered here discretizes the Gaussian copula in such a way that the marginal is a Bernoulli distribution. A probit link is used to take into account concomitant information in the behaviour of the underlying marginal distribution. Fixed and time-varying covariates can be included in the model. The method is simple and is a natural extension of the AR(1) model for Gaussian series. Since the approach put forward is likelihood-based, it allows interpretations and inferences to be made that are not possible with semi-parametric approaches such as those based on generalized estimating equations. Data from a study designed to reduce the exposure of children to the sun are used to illustrate the methods.
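The discretized Gaussian copula construction can be sketched by simulation: a latent standard-normal AR(1) series carries the serial dependence, and thresholding it (the probit mechanism) yields a binary series with a prescribed Bernoulli marginal. The values of the latent correlation rho and marginal probability p below are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def simulate_binary_ar1(T=200, rho=0.6, p=0.3):
    """Binary AR(1) series via a discretized Gaussian copula:
    a latent N(0,1) AR(1) process z_t with lag-one correlation rho is
    thresholded so each marginal is Bernoulli(p)."""
    c = NormalDist().inv_cdf(1 - p)   # threshold with P(z_t > c) = p
    z = np.empty(T)
    z[0] = rng.normal()
    for t in range(1, T):
        # stationary AR(1) recursion preserving unit marginal variance
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    return (z > c).astype(int)

y = simulate_binary_ar1()
print(y.mean())  # should be near the marginal probability p = 0.3
```

Covariates would enter by letting the threshold (equivalently, p) vary with a probit-linear predictor at each time point, which is what makes the marginal model separate from the dependence model.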
55.
The distribution of the aggregate claims in one year plays an important role in actuarial statistics for computing, for example, insurance premiums when both the number and the size of the claims must be incorporated into the model. When the number of claims follows a Poisson distribution, the aggregate distribution is called the compound Poisson distribution. In this article we assume that the claim size follows an exponential distribution, and we then study this model extensively by assuming a bidimensional prior distribution, with gamma marginals, for the parameters of the Poisson and exponential distributions. This study leads to expressions for net premiums and for marginal and posterior distributions in terms of some well-known special functions used in statistics. A Bayesian robustness study of this model is then carried out. Bayesian robustness for bidimensional models was treated in depth in the 1990s, producing numerous results, but few applications dealing with this problem can be found in the literature.
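The compound Poisson model with exponential claim sizes is easy to check by simulation: the net premium is E[S] = E[N]E[X] = lambda * theta, and a sum of n exponentials is Gamma(n, theta), which makes sampling cheap. A minimal sketch with illustrative parameter values (the Bayesian layer with gamma priors is not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

def aggregate_claims(lam=10.0, theta=2.0, n_sims=100_000):
    """Simulate one-year aggregate claims S = X_1 + ... + X_N with
    N ~ Poisson(lam) and i.i.d. claim sizes X_i ~ Exponential(mean theta).
    The net premium is E[S] = lam * theta."""
    n = rng.poisson(lam, size=n_sims)
    # Sum of n exponentials is Gamma(n, theta); N = 0 gives a point mass at 0.
    s = np.where(n > 0, rng.gamma(np.maximum(n, 1), theta), 0.0)
    return s

s = aggregate_claims()
print(s.mean())  # should be close to the net premium 10 * 2 = 20
```

Placing gamma priors on lam and 1/theta, as the article does, conjugates naturally with the Poisson and exponential likelihoods, which is what yields the closed-form special-function expressions.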
56.
Several colorectal cancer (CRC) screening models have been developed describing the progression of adenomas to CRC. Currently, there is increasing evidence that serrated lesions can also develop into CRC. It is not clear whether screening tests have the same test characteristics for serrated lesions as for adenomas, but lower sensitivities have been suggested. Models that ignore this type of colorectal lesions may provide overly optimistic predictions of the screen-induced reduction in CRC incidence. To address this issue, we have developed the Adenoma and Serrated pathway to Colorectal CAncer (ASCCA) model that includes the adenoma-carcinoma pathway and the serrated pathway to CRC as well as characteristics of colorectal lesions. The model structure and the calibration procedure are described in detail. Calibration resulted in 19 parameter sets for the adenoma-carcinoma pathway and 13 for the serrated pathway that match the age- and sex-specific adenoma and serrated lesion prevalence in the COlonoscopy versus COlonography Screening (COCOS) trial, Dutch CRC incidence and mortality rates, and a number of other intermediate outcomes concerning characteristics of colorectal lesions. As an example, we simulated outcomes for a biennial fecal immunochemical test screening program and a hypothetical one-time colonoscopy screening program. Inclusion of the serrated pathway influenced the predicted effectiveness of screening when serrated lesions are associated with lower screening test sensitivity or when they are not removed. To our knowledge, this is the first model that explicitly includes the serrated pathway and characteristics of colorectal lesions. It is suitable for the evaluation of the (cost-)effectiveness of potential screening strategies for CRC.
57.
The German Corporate Tax Reform Act of 2008 requires an adjustment of classic valuation concepts because it limits interest deduction from taxable income depending on the operating performance of the company. By using time- and state-contingent discount rates in a risk-neutral valuation with predetermined debt levels, a theoretically sound valuation result is obtained. However, a modified APV concept, which assumes deterministic debt over the planning horizon and constant leverage in the terminal-value phase, also yields consistent valuation results when two types of tax shields with different levels of risk are distinguished.
58.
Statistics and Computing - Two algorithms are proposed to simulate space-time Gaussian random fields with a covariance function belonging to an extended Gneiting class, the definition of which...
59.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with the model averaging approach, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach, but we believe that, although both can be recommended in practice, the model averaging approach is more appealing because of some deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered for conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
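One common way to implement frequentist model averaging over candidate families like those named in this abstract is via Akaike weights, where each fitted model's estimate is weighted by exp(-ΔAIC/2), normalized. The abstract does not state which weighting scheme the authors use, so this is a generic sketch with hypothetical AIC values for the linear, exponential, and Emax fits.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights for frequentist model averaging:
    w_m proportional to exp(-ΔAIC_m / 2), normalized to sum to one.
    The model-averaged treatment effect is then sum_m w_m * effect_m."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()         # ΔAIC relative to the best model
    w = np.exp(-delta / 2)
    return w / w.sum()

# Hypothetical AICs for the linear, exponential, and Emax fits
w = akaike_weights([130.2, 131.0, 128.5])
print(w.round(3))
```

Model selection corresponds to the degenerate weight vector that puts all mass on the lowest-AIC model; the smooth weights are what make the averaged estimate less sensitive to misspecification.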
60.
In this paper, we define and study a new notion for the comparison of the hazard rates of two random variables taking into account their mutual dependence. Properties, applications and the comparison for a data set are given.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号