2,771 results found (search time: 609 ms)
71.
K. C. Siju, Journal of Statistical Computation and Simulation, 2018, 88(9): 1717–1748
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation due to fatigue and cracks) are observed over a specified period of time. Depending on the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components may have different lifetimes, the rate of degradation is treated as a random variable, and the time-to-failure distribution is obtained at a critical level of degradation. The exponential and power degradation models are studied, and an exponential density function is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density function, the predictive distribution, a hierarchical Bayes approach, and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for train-wheel degradation data.
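The exponential degradation setup admits a quick Monte Carlo sketch. Everything here is an illustrative assumption rather than a value from the paper: the model form D(t) = exp(theta·t), the rate `lam` of the exponential density for theta, and the critical level `D_c`. Failure occurs when degradation first reaches D_c, so T = log(D_c)/theta.

```python
import numpy as np

# Monte Carlo sketch of an exponential degradation model. All names and numbers
# (lam, D_c, the form D(t) = exp(theta * t)) are illustrative assumptions.
rng = np.random.default_rng(42)
lam = 2.0    # assumed rate of the exponential density for the degradation rate
D_c = 10.0   # assumed critical degradation level
n = 5000

theta = rng.exponential(scale=1.0 / lam, size=n)  # random degradation rates
T = np.log(D_c) / theta                           # time to failure: exp(theta*T) = D_c

# Maximum likelihood estimate of lam from the observed rates
lam_mle = 1.0 / theta.mean()
```

Drawing the rate afresh for each unit is what makes the induced failure times heterogeneous across components, as the abstract describes.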
72.
Marc Sobel, Communications in Statistics – Theory and Methods, 2018, 47(24): 5916–5933
Information available before unblinding about the success of confirmatory clinical trials is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample-size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing simple adjustments or the EM algorithm. By contrast, our proposed methodology provides intervals for the expected sample size using the posterior distribution of auxiliary parameters. Future decisions about additional subjects are better informed by our ability to account for subject-response heterogeneity over time. We apply the proposed methodologies to a depression trial. Given their ease of implementation, the proposed blinded procedures should be considered for most studies.
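For context, a minimal sketch of the blinded "simple adjustment" baseline that the abstract contrasts with its sequential MCMC procedure (this is not the paper's method, and all numbers are invented): in a 1:1 two-arm trial the blinded pooled variance overstates the within-arm variance by delta²/4, so that amount is subtracted before applying the standard sample-size formula.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch of a blinded simple adjustment to the sample size.
# All numbers are invented; this is a baseline technique, not the paper's method.
rng = np.random.default_rng(11)
delta = 0.5                 # assumed clinically relevant difference
alpha, power = 0.05, 0.80

# Blinded interim data: the two arms are pooled and treatment labels are hidden.
interim = np.r_[rng.normal(0.0, 1.2, 80), rng.normal(delta, 1.2, 80)]

sigma2 = interim.var(ddof=1) - delta**2 / 4         # remove mixture inflation
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = int(np.ceil(2 * sigma2 * z**2 / delta**2))
```

A point estimate like `n_per_arm` is exactly what the paper argues against reporting alone; its posterior-based intervals quantify how variable this number is before unblinding.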
73.
Robust Bayesian nonlinear mixed‐effects modeling of time to positivity in tuberculosis trials
Free PDF available from the Pharmaceutical Statistics website.
Early phase 2 tuberculosis (TB) trials are conducted to characterize the early bactericidal activity (EBA) of anti-TB drugs. The EBA of anti-TB drugs has conventionally been calculated as the rate of decline in colony-forming unit (CFU) count during the first 14 days of treatment. The measurement of CFU count, however, is expensive and prone to contamination. As an alternative to CFU count, time to positivity (TTP), a potential biomarker for the long-term efficacy of anti-TB drugs, can be used to characterize EBA. The current Bayesian nonlinear mixed-effects (NLME) regression model for TTP data, however, lacks robustness to the gross outliers often present in such data. The conventional way of handling these outliers involves identifying them by visual inspection and then excluding them from the analysis, a process that is questionable because of its subjective nature. For this reason, we fitted robust versions of the Bayesian NLME regression model to a wide range of TTP datasets. The performance of the explored models was assessed through model-comparison statistics and a simulation study. We conclude that fitting a robust model to TTP data obviates the need for explicit identification and subsequent "deletion" of outliers while ensuring that gross outliers exert no undue influence on model fits. We recommend that the current practice of fitting conventional normal-theory models be abandoned in favor of fitting robust models to TTP data.
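The mechanics behind such robustness can be seen in miniature (the residuals below are invented, and a Student-t error model is used as a generic stand-in for whatever robust likelihood the paper adopts): a gross outlier costs far less log-likelihood under a heavy-tailed model than under a normal one, so it pulls the fit much less.

```python
import numpy as np
from scipy import stats

# Why a heavy-tailed likelihood is robust: compare the log-likelihood cost of a
# gross outlier under normal vs. Student-t errors. Residuals are invented.
resid = np.array([0.1, -0.2, 0.15, 8.0])   # last value is a gross outlier

ll_norm = stats.norm.logpdf(resid, scale=1.0)
ll_t = stats.t.logpdf(resid, df=4, scale=1.0)
```

Under the normal model the outlier's log-density is quadratically catastrophic, so the fit bends toward it; under the t model it is only logarithmically penalized, which is why no manual "deletion" step is needed.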
74.
Tai VoVan, Communications in Statistics – Theory and Methods, 2018, 47(8): 1792–1811
In this article, we propose a new criterion to evaluate the similarity of probability density functions (pdfs), which we call the similar coefficient of cluster (SCC) criterion, and use it as a tool to handle overlap coefficients of pdfs normalized to the standard interval [0, 1]. With the support of a self-updating algorithm for determining the suitable number of clusters, the SCC then becomes a criterion for assigning pdfs to their corresponding clusters. Moreover, we present results on the determination of the SCC for two and for more than two pdfs, as well as relations between different SCCs and other measures. Numerical examples on both synthetic and real data are given not only to illustrate the suitability of the proposed theory and algorithms but also to demonstrate their applicability and novelty.
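The overlap coefficient that the SCC builds on is straightforward to evaluate numerically; as a stand-in example (the paper's own test densities are not reproduced here), two unit-variance normal pdfs a distance of 1 apart give OVL = ∫ min(f, g) = 2Φ(-1/2):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

# Numerical sketch of a pdf overlap coefficient, OVL = integral of min(f, g).
x = np.linspace(-10.0, 10.0, 200001)
f = norm.pdf(x, loc=0.0, scale=1.0)
g = norm.pdf(x, loc=1.0, scale=1.0)

ovl = trapezoid(np.minimum(f, g), x)   # approx. 2 * Phi(-1/2) for this pair
```

OVL equals 1 for identical pdfs and 0 for disjoint supports, which is what makes it a natural raw similarity measure for clustering densities.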
75.
Campylobacter QMRA: A Bayesian Estimation of Prevalence and Concentration in Retail Foods Under Clustering and Heavy Censoring
Antti Mikkelä, Jukka Ranta, Manuel González, Marjaana Hakkinen, Pirkko Tuominen, Risk Analysis, 2016, 36(11): 2065–2080
A Bayesian statistical temporal-prevalence-concentration model (TPCM) was built to assess the prevalence and concentration of pathogenic Campylobacter species in batches of fresh chicken and turkey meat at retail. The data set was collected from Finnish grocery stores in all seasons of the year. Observations at low concentration levels are often censored owing to the limit of determination of the microbiological methods. The model exploits the ability of Bayesian methods to borrow strength from related samples in order to perform under heavy censoring; in this extreme case, the majority of the observed batch-specific concentrations were below the limit of determination. A hierarchical structure was included in the model to account for within-batch and between-batch variability, which may have a significant impact on the sample outcome depending on the sampling plan. Temporal changes in the prevalence of Campylobacter were modeled with a Markovian time series. The proposed model is adaptable to other pathogens if the same type of data set is available. The model was computed with the OpenBUGS software.
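The censoring mechanism at the heart of the model can be isolated in a minimal sketch (this is only the likelihood idea, not the paper's hierarchical TPCM, and all numbers are invented): concentrations are taken as lognormal, and each value below the limit of determination (LOD) contributes the CDF at the LOD to the likelihood instead of a density term.

```python
import numpy as np
from scipy import stats, optimize

# Maximum likelihood with left-censoring at a limit of determination (LOD).
# Invented numbers; a hedged stand-in for the censoring idea only.
rng = np.random.default_rng(1)
mu_true, sigma_true, lod = 1.0, 0.8, 1.5
y = rng.lognormal(mean=mu_true, sigma=sigma_true, size=400)

observed = y[y >= lod]           # quantified concentrations
n_cens = int((y < lod).sum())    # counts known only to be below the LOD

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = stats.lognorm.logpdf(observed, s=sigma, scale=np.exp(mu)).sum()
    ll += n_cens * stats.lognorm.logcdf(lod, s=sigma, scale=np.exp(mu))
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], float(np.exp(res.x[1]))
```

When nearly all observations are censored, this likelihood alone becomes weakly informative, which is why the paper borrows strength across related samples through a hierarchical prior.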
76.
Journal of Statistical Computation and Simulation, 2012, 82(9): 1785–1797
Density estimation for pre-binned data is challenging because the exact positions of the original observations are lost. Traditional kernel density estimation methods cannot be applied when data are pre-binned in unequally spaced bins or when one or more bins are semi-infinite intervals. We propose a novel density estimation approach using the generalized lambda distribution (GLD) for data that have been pre-binned over a sequence of consecutive bins. This method enjoys both the high power of a parametric model and the great shape flexibility of the GLD. The performance of the proposed estimators is benchmarked via simulation studies. Both the simulation results and a real-data application show that the proposed density estimators work well for data of moderate or large sizes.
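Fitting the GLD itself requires its quantile-function machinery, so this hedged stand-in demonstrates only the binned-likelihood principle with a normal family instead: maximize the multinomial likelihood of the bin counts, where each bin probability is a CDF difference. Note that unequal and semi-infinite bins pose no problem for this formulation.

```python
import numpy as np
from scipy import stats, optimize

# Binned maximum likelihood: fit a parametric CDF to bin counts.
# A normal family stands in for the GLD; all numbers are invented.
rng = np.random.default_rng(7)
data = rng.normal(loc=2.0, scale=1.5, size=2000)

edges = np.array([-np.inf, 0.0, 1.0, 2.0, 3.0, 4.0, np.inf])  # unequal bins
finite = np.r_[-1e12, edges[1:-1], 1e12]     # finite proxies for histogramming
counts, _ = np.histogram(data, bins=finite)

def neg_log_lik(params):
    mu, log_sd = params
    p = np.diff(stats.norm.cdf(edges, loc=mu, scale=np.exp(log_sd)))
    return -(counts * np.log(np.clip(p, 1e-12, None))).sum()

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sd_hat = res.x[0], float(np.exp(res.x[1]))
```

The GLD version replaces the normal CDF with bin probabilities derived from the GLD's four-parameter quantile function, which is what buys the extra shape flexibility the abstract emphasizes.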
77.
Lixin Meng, Journal of Statistical Computation and Simulation, 2017, 87(1): 88–99
Ordinary differential equations (ODEs) are commonly used to model dynamic processes in applied sciences such as biology, engineering, and physics, among many other areas. The parameters of these models are usually unknown and are therefore often specified artificially or empirically. A feasible alternative is to estimate the parameters from observed data. In this study, we propose a Bayesian penalized B-spline approach to estimate the parameters and initial values of ODEs used in epidemiology. We evaluated the efficiency of the proposed method in simulations using a Markov chain Monte Carlo algorithm for the Kermack–McKendrick model. The proposed approach is also illustrated with a real application to the transmission dynamics of hepatitis C virus in mainland China.
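The Kermack–McKendrick (SIR) model named in the abstract makes the parameter-estimation problem concrete. Below, a plain least-squares fit stands in, hedged, for the paper's Bayesian penalized B-spline approach; beta (transmission) and gamma (recovery) are the unknown parameters and all numbers are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# SIR model: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I.
def sir(t, y, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

t_obs = np.linspace(0.0, 30.0, 31)
y0 = [0.99, 0.01, 0.0]               # initial S, I, R fractions (assumed known here)
beta_true, gamma_true = 0.5, 0.1

sol = solve_ivp(sir, (0.0, 30.0), y0, t_eval=t_obs,
                args=(beta_true, gamma_true), rtol=1e-8)
rng = np.random.default_rng(3)
i_obs = sol.y[1] + rng.normal(0.0, 0.005, size=t_obs.size)  # noisy infectious fraction

def residuals(params):
    fit = solve_ivp(sir, (0.0, 30.0), y0, t_eval=t_obs,
                    args=tuple(params), rtol=1e-8)
    return fit.y[1] - i_obs

est = least_squares(residuals, x0=[0.3, 0.2], bounds=([0.0, 0.0], [2.0, 1.0])).x
```

The paper's approach additionally treats the initial values as unknowns and replaces repeated ODE solving with a penalized B-spline representation inside an MCMC sampler, which yields full posteriors rather than point estimates.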
78.
We develop a novel computational methodology for Bayesian optimal sequential design for nonparametric regression. This methodology, which we call inhomogeneous evolutionary Markov chain Monte Carlo, combines ideas from simulated annealing, genetic or evolutionary algorithms, and Markov chain Monte Carlo. Our framework allows optimality criteria with general utility functions and general classes of priors for the underlying regression function. We illustrate the usefulness of the methodology with applications to experimental design for nonparametric function estimation using Gaussian process priors and free-knot cubic spline priors.
79.
Small-area statistics obtained from sample survey data provide a critical source of information for studying health, economic, and sociological trends. However, most large-scale sample surveys are not designed to produce small-area statistics. Moreover, data disseminators are prevented from releasing public-use microdata for small geographic areas for disclosure reasons, which limits the utility of the data they collect. This research evaluates a synthetic-data method, intended for data disseminators, for releasing public-use microdata for small geographic areas based on complex sample survey data. The method replaces all observed survey values with synthetic (or imputed) values generated from a hierarchical Bayesian model that explicitly accounts for complex sample-design features, including stratification, clustering, and sampling weights. The method is applied to restricted microdata from the National Health Interview Survey, and synthetic data are generated for both sampled and non-sampled small areas. The analytic validity of the resulting small-area inferences is assessed by direct comparison with the actual data, a simulation study, and a cross-validation study.
80.
Björn Bornkamp, David Ohlssen, Baldur P. Magnusson, Heinz Schmidli, Pharmaceutical Statistics, 2017, 16(2): 133–142
In many clinical trials, biological, pharmacological, or clinical information is used to define candidate subgroups of patients that might show a differential treatment effect. Once the trial results are available, interest focuses on the subgroups with an increased treatment effect. Estimating a treatment effect for these groups, together with an adequate uncertainty statement, is challenging owing to the resulting "random high," or selection, bias. In this paper, we investigate Bayesian model averaging to address this problem. The general motivation for model averaging is the observation that subgroup selection can be viewed as model selection, so methods for dealing with model-selection uncertainty, such as model averaging, can also be used in this setting. Simulations are used to evaluate the performance of the proposed approach, and we illustrate it with an example from an early-phase clinical trial.
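The averaging step itself is simple to sketch with invented numbers (the paper averages over proper Bayesian posteriors; here each model's log marginal likelihood is crudely approximated by -BIC/2 as a hedged stand-in): posterior model weights are the normalized exponentials, and the averaged subgroup effect shrinks the selected subgroup's "random high" back toward the overall estimate.

```python
import numpy as np

# Model averaging over two candidate models: an overall-effect model and a
# subgroup-effect model. Effects and BIC values are invented for illustration.
effects = np.array([0.40, 0.65])   # estimate under each candidate model
bic = np.array([100.0, 102.0])     # assumed BIC values for the two models

logw = -0.5 * bic                  # crude log marginal likelihood approximation
w = np.exp(logw - logw.max())      # subtract max for numerical stability
w /= w.sum()                       # posterior model weights

averaged_effect = float((w * effects).sum())
```

Because the averaged estimate lies strictly between the two model-specific estimates, simply reporting the best subgroup's 0.65 would overstate the effect; the weights quantify how much that selection should be discounted.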