1.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, particularly with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
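The blinded idea can be sketched numerically. The snippet below is a simplified illustration, not the paper's monitoring procedure: it uses a standard delta-method sample-size formula for comparing two negative binomial rates (Var(log mean) ≈ (1/μ + κ)/n, with κ the overdispersion in Var = μ + κμ²), estimates the pooled mean and a moment estimate of κ from blinded interim counts, and recomputes the sample size under the rate ratio assumed at planning. All numbers are hypothetical.

```python
import math
from statistics import NormalDist

def nb_sample_size(mu1, mu2, kappa, alpha=0.05, power=0.8):
    """Per-arm sample size for comparing two negative binomial event rates
    via a delta-method approximation: Var(log mu_hat) ~ (1/mu + kappa)/n."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = (1 / mu1 + kappa) + (1 / mu2 + kappa)
    return math.ceil(z**2 * var / math.log(mu1 / mu2) ** 2)

def blinded_reestimate(counts, rate_ratio, alpha=0.05, power=0.8):
    """Re-estimate the sample size from pooled (blinded) counts, keeping the
    rate ratio assumed at the planning stage."""
    n = len(counts)
    xbar = sum(counts) / n
    s2 = sum((c - xbar) ** 2 for c in counts) / (n - 1)
    kappa = max((s2 - xbar) / xbar**2, 0.0)  # moment estimator, floored at 0
    # split the pooled mean into two arm means consistent with the assumed ratio
    mu2 = 2 * xbar / (1 + rate_ratio)
    mu1 = rate_ratio * mu2
    return nb_sample_size(mu1, mu2, kappa, alpha, power)

# hypothetical blinded interim relapse counts (treatment labels hidden)
pilot = [0, 2, 1, 3, 0, 1, 4, 2, 0, 1, 2, 5, 1, 0, 3, 2]
print(blinded_reestimate(pilot, rate_ratio=0.6))
```

Because only pooled counts enter the calculation, the interim look does not unblind the treatment comparison.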
2.
Modeling spatial overdispersion requires point process models with finite-dimensional distributions that are overdispersed relative to the Poisson distribution. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. In addition, although processes based on negative binomial finite-dimensional distributions have been widely considered, they typically fail to satisfy all three properties required for fitting simultaneously. Indeed, Diggle and Milne conjectured that no negative binomial model can satisfy all three properties. In light of this, we change perspective and construct a new process based on a different overdispersed count model, namely the generalized Waring (GW) distribution. While comparable in tractability and flexibility to negative binomial processes, the GW process is shown to possess all the required properties and, additionally, to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
3.
Generalized additive models for location, scale and shape
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
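A minimal sketch of the GAMLSS idea of modelling parameters beyond the mean, assuming a normal response whose mean (b0 + b1·x) and log standard deviation (g0 + g1·x) are both linear in one covariate. Plain gradient ascent on the log-likelihood stands in for the Newton–Raphson/Fisher-scoring and backfitting machinery of the actual GAMLSS framework; the variable names and synthetic data are illustrative only.

```python
import math
import random

# Synthetic data: mean and log-sd both depend linearly on x.
random.seed(1)
xs = [i / 50 for i in range(100)]
ys = [random.gauss(1.0 + 2.0 * x, math.exp(-1.0 + 1.5 * x)) for x in xs]
n = len(xs)

def loglik(params):
    b0, b1, g0, g1 = params
    total = 0.0
    for x, y in zip(xs, ys):
        mu, sig = b0 + b1 * x, math.exp(g0 + g1 * x)
        total += -math.log(sig) - 0.5 * ((y - mu) / sig) ** 2
    return total

def fit(steps=3000, lr=0.05):
    b0 = b1 = g0 = g1 = 0.0
    for _ in range(steps):
        db0 = db1 = dg0 = dg1 = 0.0
        for x, y in zip(xs, ys):
            mu, sig = b0 + b1 * x, math.exp(g0 + g1 * x)
            r = (y - mu) / sig
            db0 += r / sig              # d loglik / d b0
            db1 += r / sig * x          # d loglik / d b1
            dg0 += r * r - 1.0          # d loglik / d g0 (log-scale parameter)
            dg1 += (r * r - 1.0) * x    # d loglik / d g1
        b0, b1 = b0 + lr * db0 / n, b1 + lr * db1 / n
        g0, g1 = g0 + lr * dg0 / n, g1 + lr * dg1 / n
    return b0, b1, g0, g1

start = loglik((0.0, 0.0, 0.0, 0.0))
est = fit()
print(est, loglik(est) > start)
```

The point of the exercise is only that the scale parameter gets its own linear predictor and is estimated jointly with the location, the core structural idea of GAMLSS.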
4.
The distribution of the number of successes in n independent trials with possibly different success probabilities p_1, p_2, ..., p_n is frequently approximated by a Poisson distribution with parameter λ = p_1 + p_2 + ... + p_n. LeCam's bound p_1^2 + p_2^2 + ... + p_n^2 for the total variation distance between both distributions is particularly useful provided the success probabilities are small. The paper presents an improved version of LeCam's bound for the case in which a generalized d-dimensional Poisson binomial distribution is approximated by a compound Poisson distribution. Received: May 10, 2000; revised version: January 15, 2001
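The classical one-dimensional bound can be checked numerically: the exact Poisson binomial pmf is computable by dynamic programming, and its total variation distance to the approximating Poisson can be compared against Σ p_i². The success probabilities below are hypothetical. A sketch:

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i) by dynamic programming."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        pmf = new
    return pmf

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# hypothetical small success probabilities
ps = [0.02, 0.05, 0.01, 0.03, 0.04]
lam = sum(ps)
pb = poisson_binomial_pmf(ps)
# total variation distance: half the L1 distance; the Poisson tail beyond
# the Poisson binomial support (where pb is 0) is summed up to a cutoff.
tv = 0.5 * (sum(abs(pb[k] - poisson_pmf(lam, k)) for k in range(len(pb)))
            + sum(poisson_pmf(lam, k) for k in range(len(pb), 60)))
lecam = sum(p * p for p in ps)
print(tv, lecam)
```

For small p_i the distance is far below the bound, which is exactly the regime in which the Poisson approximation is used.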
5.
Certain identities are of great importance in research areas such as function theory, combinatorics, and analytic number theory. Taking the binomial expansion as a generating function, proofs of several combinatorial identities are given.
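Two classical instances of such identities follow directly from the binomial generating function (1 + x)^n: setting x = 1 gives the row-sum identity, and comparing coefficients in (1+x)^m (1+x)^n = (1+x)^(m+n) gives Vandermonde's identity. A numerical check (the specific n, m, r values are arbitrary):

```python
from math import comb

n, m, r = 8, 5, 6

# row-sum identity: sum_k C(n, k) = 2^n, from (1 + x)^n at x = 1
row_sum = sum(comb(n, k) for k in range(n + 1))
assert row_sum == 2**n

# Vandermonde: sum_k C(m, k) C(n, r - k) = C(m + n, r),
# from comparing x^r coefficients in (1+x)^m (1+x)^n = (1+x)^(m+n)
vandermonde = sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))
assert vandermonde == comb(m + n, r)

print(row_sum, vandermonde)
```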
6.
Estimation from Zero-Failure Data
When performing quantitative (or probabilistic) risk assessments, it is often the case that data for many of the potential events in question are sparse or nonexistent. Some of these events may be well-represented by the binomial probability distribution. In this paper, a model for predicting the binomial failure probability, P, from data that include no failures is examined. A review of the literature indicates that the use of this model is currently limited to risk analysis of energetic initiation in the explosives testing field. The basis for the model is discussed, and the behavior of the model relative to other models developed for the same purpose is investigated. It is found that the qualitative behavior of the model is very similar to that of the other models, and for larger values of n (the number of trials), the predicted P values varied by a factor of about eight among the five models examined. Analysis revealed that the estimator is nearly identical to the median of a Bayesian posterior distribution, derived using a uniform prior. An explanation of the application of the estimator in explosives testing is provided, and comments are offered regarding the use of the estimator versus other possible techniques.
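A sketch of the Bayesian comparison point mentioned in the abstract, assuming the uniform prior stated there: with zero failures in n trials, a Beta(1, 1) prior updates to a Beta(1, n + 1) posterior, whose cdf F(p) = 1 - (1 - p)^(n + 1) gives the posterior median in closed form.

```python
# Posterior-median estimate of a binomial failure probability after
# observing zero failures in n trials, under a uniform Beta(1, 1) prior:
# the posterior Beta(1, n + 1) has cdf F(p) = 1 - (1 - p)^(n + 1), so the
# median solves F(p) = 1/2.
def zero_failure_median(n):
    return 1.0 - 0.5 ** (1.0 / (n + 1))

for n in (10, 100, 1000):
    print(n, zero_failure_median(n))
```

For large n the estimate behaves like ln(2)/(n + 1), i.e., it shrinks roughly as 0.69/n, which matches the intuition that longer failure-free histories support smaller failure probabilities.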
7.
In this article, a system consisting of n independent components, each having two dependent subcomponents (Ai, Bi), i = 1, …, n, is considered. The system functions if and only if both the subsystem of subcomponents A1, A2, …, An and the subsystem of subcomponents B1, B2, …, Bn work under certain structural rules. Expressions for the reliability and mean time to failure of such systems are obtained. A sufficient condition for comparing two systems of bivariate components in terms of stochastic ordering is also presented.
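The abstract leaves the structural rules general; as a sketch, consider the special case where both subcomponent subsystems are series. Then the system works iff every Ai and every Bi works, and with independent components the reliability reduces to the product of the joint working probabilities P(Ai works and Bi works), which capture the dependence within each pair. The numbers below are hypothetical.

```python
# Reliability of a system of n independent components, each with two
# dependent subcomponents (A_i, B_i), when both the A-subsystem and the
# B-subsystem are series structures: the system works iff every A_i and
# every B_i works, so R = prod_i P(A_i works and B_i works).
def series_series_reliability(joint_both):
    r = 1.0
    for p in joint_both:
        r *= p
    return r

# hypothetical joint probabilities P(A_i works and B_i works), i = 1..3
joint_both = [0.95, 0.90, 0.97]
print(series_series_reliability(joint_both))
```

Note that only the joint probabilities enter: a positive dependence between Ai and Bi raises P(Ai and Bi work) above the product of the marginals and hence raises system reliability relative to the independent-subcomponent case.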
8.
The professionalization of evaluation continues to be debated at numerous conferences in the U.S. and abroad. At this time, AEA member views on the potential benefits and negative side effects of professionalization are essential as the discussion evolves. This study provides recent views on major topics in professionalization, including potential benefits, negative side effects, processes, competencies, and procedures. Results from in-depth interviews and an online survey demonstrate that AEA members view stakeholder trust and evaluator reputation and identity as potential benefits of professionalization, while participants expressed concerns about a potential negative side effect known as the "narrowing effect" (i.e., that some evaluators will be alienated based on their background, competencies, etc.). These recent findings can inform the ongoing discussion of professionalization and suggest new directions for future research on evaluation.
9.
Acceptance sampling plans offered by ISO 2859-1 are far from optimal under the conditions for statistical verification in modules F and F1 as prescribed by Annex II of the Measuring Instruments Directive (MID) 2014/32/EU, resulting in sample sizes that are larger than necessary. An optimised single-sampling scheme is derived, both for large lots using the binomial distribution and for finite-sized lots using the exact hypergeometric distribution, resulting in smaller sample sizes that are economically more efficient while offering the full statistical protection required by the MID.
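The finite-lot calculation can be sketched with the exact hypergeometric distribution: for a given acceptance number c, find the smallest sample size n whose probability of acceptance at the limiting quality does not exceed the consumer's risk. The lot size, limiting quality, and risk level below are hypothetical illustrations, not the MID module F/F1 requirements.

```python
from math import comb

def hypergeom_accept_prob(N, D, n, c):
    """P(at most c defectives in a sample of n drawn without replacement
    from a lot of N items containing D defectives)."""
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

def min_sample_size(N, D, c, beta):
    """Smallest n whose acceptance probability at D defectives is <= beta
    (the consumer's risk)."""
    for n in range(c + 1, N + 1):
        if hypergeom_accept_prob(N, D, n, c) <= beta:
            return n
    return N

# hypothetical: lot of 500 items, limiting quality 5% defective (D = 25),
# consumer's risk 10%, accept only on zero defectives in the sample
print(min_sample_size(N=500, D=25, c=0, beta=0.10))
```

Because sampling is without replacement, the hypergeometric plan needs a somewhat smaller n than the binomial (infinite-lot) approximation would suggest, which is the source of the economy the abstract refers to for finite lots.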
10.
In recent years, different approaches for the analysis of time-to-event data in the presence of competing risks, i.e. when subjects can fail from one of two or more mutually exclusive types of event, have been introduced, focusing either on cause-specific or on subdistribution hazard rates. Many of the newer approaches use complicated weighting techniques or resampling methods that do not allow an analytical evaluation. Simulation studies therefore often replace analytical comparisons, since they can be performed more easily and allow the investigation of non-standard scenarios. Adequate simulation studies require the generation of appropriate random numbers. We present an approach to generate competing risks data following flexible prespecified subdistribution hazards. Event times and types are simulated using possibly time-dependent cause-specific hazards, chosen in such a way that the generated data follow the desired subdistribution hazards or hazard ratios, respectively.
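The cause-specific-hazard simulation scheme is easiest to see in its simplest special case, constant hazards, rather than the flexible time-dependent construction of the paper: the all-cause event time is exponential with rate h1 + h2, and the event type is 1 with probability h1/(h1 + h2), independently of the time. The rates below are hypothetical.

```python
import random

# Competing risks with constant cause-specific hazards h1 and h2:
# simulate the all-cause time as Exponential(h1 + h2) and draw the event
# type independently with P(type 1) = h1 / (h1 + h2).
def simulate_competing_risks(n, h1, h2, rng):
    data = []
    for _ in range(n):
        t = rng.expovariate(h1 + h2)                      # all-cause event time
        cause = 1 if rng.random() < h1 / (h1 + h2) else 2  # event type
        data.append((t, cause))
    return data

rng = random.Random(7)
sample = simulate_competing_risks(20000, h1=0.3, h2=0.1, rng=rng)
share_cause1 = sum(1 for _, c in sample if c == 1) / len(sample)
mean_time = sum(t for t, _ in sample) / len(sample)
print(share_cause1, mean_time)  # roughly 0.75 and 2.5 for these rates
```

With time-dependent hazards the same two-step logic applies, but the event time is drawn by inverting the all-cause cumulative hazard and the type probability becomes h1(t)/(h1(t) + h2(t)) at the simulated time.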