512 results found
1.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specifying overdispersion is a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risk of an inadequate sample size, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance of the reestimated sample size can be considerable, particularly with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance of the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
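The blinded review described above hinges on estimating the negative binomial dispersion from pooled counts without unblinding treatment labels. The sketch below illustrates the idea with a simple method-of-moments review; the arm rates, shape parameter, and function names are illustrative assumptions, not the paper's actual procedure.

```python
import math
import random

random.seed(42)

def neg_binomial(mean, shape, rng=random):
    """One negative binomial count via the gamma-Poisson mixture
    (variance = mean + mean**2 / shape)."""
    lam = rng.gammavariate(shape, mean / shape)
    # Poisson draw by Knuth's inversion (adequate for small lam)
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod < L:
            return k
        k += 1

# pooled, treatment-blinded relapse counts from two arms (rates are illustrative)
counts = [neg_binomial(0.8, 2.0) for _ in range(150)] + \
         [neg_binomial(1.2, 2.0) for _ in range(150)]

# blinded method-of-moments review: var = mean + mean^2/shape => shape estimate
n = len(counts)
m = sum(counts) / n
v = sum((c - m) ** 2 for c in counts) / (n - 1)
shape_hat = m ** 2 / (v - m) if v > m else float("inf")
```

A reestimation procedure would feed `m` and `shape_hat` into the sample size formula at each blinded review; continuous monitoring instead tracks accumulating information until a target is reached.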
2.
Modeling spatial overdispersion requires point process models with finite-dimensional distributions that are overdispersed relative to the Poisson distribution. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. In addition, although processes based on negative binomial finite-dimensional distributions have been widely considered, they typically fail to satisfy the three required properties simultaneously. Indeed, it has been conjectured by Diggle and Milne that no negative binomial model can satisfy all three properties. In light of this, we change perspective and construct a new process based on a different overdispersed count model, namely, the generalized Waring (GW) distribution. While comparably tractable and flexible to negative binomial processes, the GW process is shown to possess all required properties and additionally to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
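One standard way to sample generalized Waring counts is through the beta-negative-binomial construction: mix the negative binomial success probability over a beta distribution. The parameter values and function name below are illustrative assumptions; the paper's process construction is more involved than this marginal sampler.

```python
import random

random.seed(7)

def generalized_waring(r, a, b, rng=random):
    """One draw via the beta-negative-binomial construction:
    p ~ Beta(a, b), then count failures before the r-th success."""
    p = rng.betavariate(a, b)
    failures = successes = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

sample = [generalized_waring(3, 4.0, 2.0) for _ in range(2000)]
mean = sum(sample) / len(sample)   # theoretical mean r*b/(a-1) = 2 here
```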
3.
Generalized additive models for location, scale and shape
R. A. Rigby D. M. Stasinopoulos 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(3):507-554
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
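GAMLSS is implemented in the R package gamlss; the toy below only illustrates, under assumed parameter values, the kind of data that motivates it: a response whose location and scale both vary with a covariate, so that a mean-only model misses half the structure.

```python
import math
import random
import statistics

random.seed(0)

# toy heteroscedastic data: both location and log-scale are linear in x
xs = [i / 100 for i in range(200)]            # x in [0, 2)
ys = [1.0 + 2.0 * x + random.gauss(0.0, math.exp(-1.0 + 1.5 * x)) for x in xs]

# residual spread about the true location line grows with x,
# which a model for the mean alone cannot capture
resid = [y - (1.0 + 2.0 * x) for x, y in zip(xs, ys)]
sd_low = statistics.stdev(resid[:100])        # x in [0, 1)
sd_high = statistics.stdev(resid[100:])       # x in [1, 2)
```

A GAMLSS fit would model both the location term `1 + 2x` and the log-scale term `-1 + 1.5x` (here known by construction) as functions of x.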
4.
Michael Weba 《Statistical Papers》2002,43(3):445-452
A Poisson binomial distribution with n possibly different success probabilities p_1, p_2, ..., p_n is frequently approximated by a Poisson distribution with parameter λ = p_1 + p_2 + ... + p_n. Le Cam's bound p_1^2 + p_2^2 + ... + p_n^2 for the total variation distance between the two distributions is particularly useful provided the success probabilities are small.
The paper presents an improved version of Le Cam's bound for the case in which a generalized d-dimensional Poisson binomial distribution is to be approximated by a compound Poisson distribution.
Received: May 10, 2000; revised version: January 15, 2001
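The classical one-dimensional bound is easy to check numerically: compute the exact Poisson binomial pmf by dynamic programming and compare it with the matching Poisson distribution. The probability values are illustrative.

```python
import math

def poisson_binomial_pmf(probs):
    """Exact pmf of the number of successes via dynamic programming."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1 - p)
            nxt[k + 1] += q * p
        pmf = nxt
    return pmf

def poisson_pmf(lam, kmax):
    return [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(kmax + 1)]

probs = [0.02, 0.05, 0.03, 0.08, 0.01]
lam = sum(probs)
pb = poisson_binomial_pmf(probs)
po = poisson_pmf(lam, len(pb) - 1)
# total variation distance; the Poisson mass beyond n enters as its tail
tv = 0.5 * sum(abs(a - b) for a, b in zip(pb, po)) + 0.5 * (1 - sum(po))
lecam = sum(p * p for p in probs)   # Le Cam's bound: tv <= sum of p_i^2
```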
5.
Certain identities are of great importance in areas of research such as function theory, combinatorics, and analytic number theory. Taking the binomial expression as a generating function, proofs of several combinatorial identities are given.
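One classical identity of the kind proved via binomial generating functions is Vandermonde's identity, which falls out of comparing coefficients in (1+x)^m (1+x)^n = (1+x)^(m+n). It can be verified exhaustively for small cases:

```python
from math import comb

# Vandermonde's identity: sum_k C(m,k) * C(n,r-k) = C(m+n,r)
def vandermonde_lhs(m, n, r):
    return sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))

# math.comb returns 0 when the lower index exceeds the upper, so the
# sum automatically truncates to the valid range of k
checks = all(vandermonde_lhs(m, n, r) == comb(m + n, r)
             for m in range(8) for n in range(8) for r in range(m + n + 1))
```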
6.
Estimation from Zero-Failure Data
Robert T. Bailey 《Risk analysis》1997,17(3):375-380
When performing quantitative (or probabilistic) risk assessments, it is often the case that data for many of the potential events in question are sparse or nonexistent. Some of these events may be well represented by the binomial probability distribution. In this paper, a model for predicting the binomial failure probability, P, from data that include no failures is examined. A review of the literature indicates that the use of this model is currently limited to risk analysis of energetic initiation in the explosives testing field. The basis for the model is discussed, and the behavior of the model relative to other models developed for the same purpose is investigated. The qualitative behavior of the model is found to be very similar to that of the other models, and for larger values of n (the number of trials) the predicted P values vary by a factor of about eight among the five models examined. Analysis reveals that the estimator is nearly identical to the median of a Bayesian posterior distribution derived using a uniform prior. An explanation of the application of the estimator in explosives testing is provided, and comments are offered regarding the use of the estimator versus other possible techniques.
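The Bayesian benchmark mentioned in the abstract has a closed form: with zero failures in n trials and a uniform Beta(1, 1) prior on P, the posterior is Beta(1, n + 1), whose median solves (1 - x)^(n+1) = 1/2. A minimal sketch (the function name is mine, not the paper's):

```python
# posterior median of P given 0 failures in n trials under a uniform prior:
# Beta(1, n+1) has CDF 1 - (1-x)^(n+1), so the median is 1 - (1/2)^(1/(n+1))
def zero_failure_median(n):
    return 1.0 - 0.5 ** (1.0 / (n + 1))

est = {n: zero_failure_median(n) for n in (10, 50, 100)}
```

As expected, the estimate shrinks as the number of failure-free trials grows.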
7.
Serkan Eryilmaz 《Communications in Statistics - Simulation and Computation》2017,46(10):8005-8013
In this article, a system consisting of n independent components, each having two dependent subcomponents (Ai, Bi), i = 1, …, n, is considered. The system functions if and only if both the system of subcomponents A1, A2, …, An and the system of subcomponents B1, B2, …, Bn work under certain structural rules. Expressions for the reliability and mean time to failure of such systems are obtained. A sufficient condition for comparing two systems of bivariate components in terms of stochastic ordering is also presented.
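For the simplest structural rule, a series system on both subsystems, the system works iff every pair (Ai, Bi) is jointly up, so the reliability is the product of the joint probabilities p11. The sketch below, with assumed joint probabilities (the paper treats general structures and lifetimes), checks this against Monte Carlo and shows why ignoring the within-pair dependence misleads:

```python
import random

random.seed(3)

# joint distribution of one pair (Ai, Bi): marginals pA, pB with positive
# dependence, since p11 > pA * pB (values are illustrative)
pA, pB, p11 = 0.9, 0.8, 0.75
p10, p01 = pA - p11, pB - p11
p00 = 1 - p11 - p10 - p01
n = 3

# series rule on both subsystems: system works iff every (Ai, Bi) = (1, 1)
exact_dependent = p11 ** n            # correct reliability
naive_independent = (pA * pB) ** n    # wrongly assuming Ai, Bi independent

trials = 200_000
hits = sum(all(random.random() < p11 for _ in range(n)) for _ in range(trials))
mc = hits / trials
```

Here the positive dependence makes the true reliability higher than the independence approximation suggests.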
8.
《Journal of Statistical Computation and Simulation》2012,82(18):3608-3619
ABSTRACT An important problem commonly arises in multi-dimensional hypothesis-testing settings when variables are correlated. In this framework the non-parametric combination (NPC) of a finite number of dependent permutation tests is suitable to cover almost all real situations of practical interest, since the dependence relations among partial tests are implicitly captured by the combining procedure itself, without the need to specify them [Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010a]. An open problem related to NPC-based tests is the impact of the dependency structure on combined tests, especially in the presence of categorical variables. This paper's goal is first to investigate, using Monte Carlo simulations, the impact of the dependency structure on the significance of combined tests for ordered categorical responses, and then to propose specific procedures aimed at improving the power of multivariate combination-based permutation tests. The results show that an increasing level of correlation/association among responses negatively affects the power of combination-based multivariate permutation tests. Applying special forms of combination functions based on the truncated product method [Zaykin DV, Zhivotovsky LA, Westfall PH, Weir BS. Truncated product method for combining p-values. Genet Epidemiol. 2002;22:170–185; Dudbridge F, Koeleman BPC. Rank truncated product of p-values, with application to genomewide association scans. Genet Epidemiol. 2003;25:360–366] or on the Liptak combination allowed us, again via Monte Carlo simulations, to demonstrate that this negative effect on power can be mitigated.
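The combining functions named above all map a vector of partial p-values to a single statistic. A minimal sketch of the three statistics (Fisher, Liptak, and the truncated product of Zaykin et al.); the p-values and truncation point tau are illustrative, and the permutation null distribution needed to calibrate each statistic is omitted:

```python
import math
from statistics import NormalDist

pvals = [0.01, 0.2, 0.7]

# Fisher: T = -2 * sum(log p);  Liptak: L = sum of Phi^{-1}(1 - p)
fisher = -2.0 * sum(math.log(p) for p in pvals)
liptak = sum(NormalDist().inv_cdf(1.0 - p) for p in pvals)

# truncated product: multiply only the p-values at or below tau
tau = 0.05
trunc = math.prod(p for p in pvals if p <= tau)
```

In an NPC test these statistics are recomputed on each permutation of the data, and significance is read off the resulting permutation distribution.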
9.
Cord A. Müller 《Journal of applied statistics》2019,46(13):2338-2356
ABSTRACT Acceptance sampling plans offered by ISO 2859-1 are far from optimal under the conditions for statistical verification in modules F and F1 as prescribed by Annex II of the Measuring Instruments Directive (MID) 2014/32/EU, resulting in sample sizes that are larger than necessary. An optimised single-sampling scheme is derived, both for large lots using the binomial distribution and for finite-sized lots using the exact hypergeometric distribution, resulting in smaller sample sizes that are economically more efficient while offering the full statistical protection required by the MID.
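The binomial machinery behind such single-sampling plans is compact: the operating-characteristic (OC) curve of a plan (n, c) gives the acceptance probability as a function of the lot defect rate. The sketch below, with illustrative risk parameters rather than the MID's actual requirements, finds the smallest zero-acceptance plan meeting a consumer-risk constraint:

```python
import math

def accept_prob(n, c, p):
    """OC curve of a single-sampling plan (n, c) under the binomial model."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(c + 1))

# smallest n with acceptance number c = 0 such that a lot with defect
# rate p1 = 10% is accepted with probability at most beta = 5%
p1, beta = 0.10, 0.05
n = 1
while accept_prob(n, 0, p1) > beta:
    n += 1
```

For finite lots the same search runs with the hypergeometric in place of the binomial, which is what allows the paper's exact finite-lot scheme to shave the sample size further.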
10.
《Journal of Statistical Computation and Simulation》2012,82(12):2557-2576
In recent years different approaches for the analysis of time-to-event data in the presence of competing risks, i.e. when subjects can fail from one of two or more mutually exclusive types of event, have been introduced. Approaches focusing either on cause-specific or on subdistribution hazard rates have been presented in the statistical literature. Many new approaches use complicated weighting techniques or resampling methods that do not allow an analytical evaluation. Simulation studies therefore often replace analytical comparisons, since they can be performed more easily and allow the investigation of non-standard scenarios. For adequate simulation studies, the generation of appropriate random numbers is essential. We present an approach to generate competing risks data following flexible prespecified subdistribution hazards. Event times and types are simulated using possibly time-dependent cause-specific hazards, chosen so that the generated data follow the desired subdistribution hazards or hazard ratios, respectively.
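The simplest special case of simulation from cause-specific hazards uses two constant hazards: the overall event time is exponential with rate l1 + l2, and the event type is chosen with probability proportional to its hazard. The rates below are illustrative; the paper's contribution is choosing possibly time-dependent cause-specific hazards so that prespecified subdistribution hazards result, which this constant-hazard sketch does not attempt.

```python
import random

random.seed(11)

# constant cause-specific hazards: time ~ Exp(l1 + l2),
# event type 1 occurs with probability l1 / (l1 + l2)
l1, l2 = 1.0, 2.0

def simulate_event(rng=random):
    t = rng.expovariate(l1 + l2)
    cause = 1 if rng.random() < l1 / (l1 + l2) else 2
    return t, cause

events = [simulate_event() for _ in range(100_000)]
share_cause1 = sum(1 for _, c in events if c == 1) / len(events)
mean_time = sum(t for t, _ in events) / len(events)
```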