Full-text access type
Paid full text | 2441 articles |
Free | 75 articles |
Free (domestic) | 38 articles |
Subject classification
Management | 454 articles |
Ethnology | 1 article |
Demography | 1 article |
Collected works & series | 34 articles |
Theory & methodology | 4 articles |
General | 696 articles |
Sociology | 10 articles |
Statistics | 1354 articles |
Publication year
2024 | 6 articles |
2023 | 14 articles |
2022 | 42 articles |
2021 | 24 articles |
2020 | 36 articles |
2019 | 76 articles |
2018 | 86 articles |
2017 | 161 articles |
2016 | 81 articles |
2015 | 71 articles |
2014 | 88 articles |
2013 | 357 articles |
2012 | 174 articles |
2011 | 109 articles |
2010 | 96 articles |
2009 | 89 articles |
2008 | 118 articles |
2007 | 125 articles |
2006 | 114 articles |
2005 | 103 articles |
2004 | 100 articles |
2003 | 71 articles |
2002 | 53 articles |
2001 | 50 articles |
2000 | 62 articles |
1999 | 52 articles |
1998 | 38 articles |
1997 | 27 articles |
1996 | 30 articles |
1995 | 21 articles |
1994 | 17 articles |
1993 | 8 articles |
1992 | 20 articles |
1991 | 6 articles |
1990 | 5 articles |
1989 | 10 articles |
1988 | 6 articles |
1987 | 3 articles |
1985 | 2 articles |
1984 | 1 article |
1981 | 2 articles |
Sort by: 2,554 results found (search time: 539 ms)
71.
Marc Sobel, Communications in Statistics – Theory and Methods, 2018, 47(24): 5916-5933
Information available before unblinding about the success of a confirmatory clinical trial is highly uncertain. Current techniques that use point estimates of auxiliary parameters to estimate the expected blinded sample size (i) fail to describe the range of likely sample sizes obtained once the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for the purpose of sample size adjustment. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size is closely related to techniques employing simple adjustments or the EM algorithm. By contrast, our proposed methodology provides intervals for the expected sample size using the posterior distribution of the auxiliary parameters. Future decisions about additional subjects are better informed by our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Given their ease of implementation, the proposed blinded procedures should be considered for most studies.
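A minimal sketch of the core idea, posterior intervals (rather than point estimates) for the re-estimated sample size. This is not the paper's sequential filtering algorithm: the conjugate inverse-gamma posterior for the blinded variance, the effect size `delta`, and the design constants are all illustrative assumptions.

```python
import random

random.seed(1)

# Hypothetical blinded interim data: pooled responses, treatment labels unknown.
blinded = [random.gauss(0.3, 1.2) for _ in range(120)]
n0 = len(blinded)
m = sum(blinded) / n0
ss = sum((x - m) ** 2 for x in blinded)

# Assumed conjugate model: normal likelihood with a vague prior gives
# sigma^2 | data ~ Inv-Gamma(n0/2, ss/2). Each posterior draw of sigma^2
# is converted to a per-arm sample size for detecting effect delta at
# two-sided alpha = 0.05 with power 0.8.
delta, z_a, z_b = 0.5, 1.96, 0.84
draws = []
for _ in range(2000):
    sigma2 = (ss / 2) / random.gammavariate(n0 / 2, 1.0)  # inverse-gamma draw
    draws.append(2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2)

draws.sort()
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(round(lo), round(hi))  # a 95% posterior interval for n per arm
```

The interval, rather than a single number, is what distinguishes this from the point-estimate adjustments the abstract criticizes.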
72.
《Journal of the Korean Statistical Society》2019,48(3):480-492
Mixtures of factor analyzers form a useful model-based clustering method that can avoid the curse of dimensionality in high-dimensional clustering. However, this approach is sensitive both to diverse non-normalities of the marginal variables and to outliers, which are commonly observed in multivariate experiments. We propose mixtures of Gaussian copula factor analyzers (MGCFA) for clustering high-dimensional data. This model has two advantages: (1) it allows different marginal distributions, giving the mixture model greater fitting flexibility; and (2) it avoids the curse of dimensionality by embedding the factor-analytic structure in the component-correlation matrices of the mixture distribution. An EM algorithm is developed for fitting MGCFA. The proposed method is free of the curse of dimensionality and allows any parametric marginal distribution that best fits the data. It is applied to both synthetic data and microarray gene expression data for clustering, and shows better performance than several existing methods.
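The Gaussian copula ingredient can be sketched with a rank-based normal-scores transform: each marginal, whatever its shape, is mapped to Gaussian quantiles while the rank dependence between variables is preserved. This is only the marginal-transformation step, not the MGCFA model or its EM fit; the exponential marginal below is an assumed example.

```python
import random
from statistics import NormalDist, mean

random.seed(0)

def normal_scores(xs):
    """Rank-based Gaussian copula transform: map each value to the normal
    quantile of its empirical CDF position rank/(n+1)."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    z = [0.0] * n
    nd = NormalDist()
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf(rank / (n + 1))
    return z

def skew(v):
    mu = mean(v)
    s2 = mean([(a - mu) ** 2 for a in v])
    return mean([(a - mu) ** 3 for a in v]) / s2 ** 1.5

# Heavily skewed marginal becomes Gaussian-shaped after the transform,
# which is what lets a Gaussian factor model be applied afterwards.
x = [random.expovariate(1.0) for _ in range(500)]
z = normal_scores(x)
print(round(skew(x), 2), round(skew(z), 2))
```

After transforming every column this way, a standard mixture of factor analyzers can be fitted to the scores, which is the intuition behind embedding the copula in the mixture.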
73.
Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation (total citations: 1; self-citations: 0; citations by others: 1)
《Journal of Statistical Computation and Simulation》2012,82(1):195-213
In recent years, dynamical modelling has been provided with a range of breakthrough methods for performing exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies to large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein-folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for the model parameters within reasonable time constraints. The ABC algorithm uses simulations of ‘subsamples’ from the assumed data-generating model, as well as a so-called ‘early-rejection’ strategy, to speed up computations in the ABC-MCMC sampler. Using a considerable number of subsamples does not appear to degrade the quality of the inferential results for the applications considered. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein dataset. The suggested methodology is fairly general and not limited to the exemplified model and data.
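The subsample-plus-early-rejection idea can be illustrated on a toy problem. The sketch below is a plain ABC rejection sampler (not the paper's ABC-MCMC for diffusions): a cheap 20-point subsample is simulated first, and the expensive full simulation is run only when the coarse distance is already plausible. The Gaussian model, prior range, and tolerances are all assumptions.

```python
import random

random.seed(42)

# 'Observed' data from an assumed N(theta, 1) model with theta = 2.
theta_true = 2.0
obs = [random.gauss(theta_true, 1.0) for _ in range(200)]
obs_mean = sum(obs) / len(obs)

def simulate_mean(theta, n):
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

accepted = []
tol = 0.1
for _ in range(3000):
    theta = random.uniform(-5, 5)  # prior draw
    # Early rejection: a cheap subsample first, with a loose threshold;
    # only run the full-size simulation when this coarse check passes.
    if abs(simulate_mean(theta, 20) - obs_mean) > 5 * tol:
        continue
    if abs(simulate_mean(theta, 200) - obs_mean) <= tol:
        accepted.append(theta)

est = sum(accepted) / len(accepted)
print(round(est, 2))  # posterior mean estimate, close to 2.0
```

Most prior draws are discarded after the 20-point subsample, which is where the speed-up over a plain full-simulation ABC sampler comes from.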
74.
《Journal of Statistical Computation and Simulation》2012,82(12):2557-2576
In recent years, several approaches have been introduced for the analysis of time-to-event data in the presence of competing risks, i.e. when subjects can fail from one of two or more mutually exclusive types of event. Approaches presented in the statistical literature focus either on cause-specific or on subdistribution hazard rates. Many of the newer approaches use complicated weighting techniques or resampling methods that do not permit an analytical evaluation. Simulation studies often replace analytical comparisons, since they can be performed more easily and allow the investigation of non-standard scenarios. For adequate simulation studies, the generation of appropriate random numbers is essential. We present an approach to generate competing risks data following flexible, prespecified subdistribution hazards. Event times and types are simulated using possibly time-dependent cause-specific hazards, chosen so that the generated data follow the desired subdistribution hazards or hazard ratios, respectively.
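The simplest special case of simulating from cause-specific hazards, constant hazards for two causes, can be sketched directly; the paper's contribution is the harder time-dependent case targeting prespecified subdistribution hazards, which this toy does not cover. With constant rates h1 and h2, the overall event time is exponential with rate h1+h2 and the event type is cause 1 with probability h1/(h1+h2).

```python
import random

random.seed(7)

def simulate_competing(h1, h2, n):
    """Simulate (time, cause) pairs under constant cause-specific hazards:
    total time ~ Exp(h1 + h2); cause = 1 with probability h1/(h1 + h2)."""
    out = []
    for _ in range(n):
        t = random.expovariate(h1 + h2)
        cause = 1 if random.random() < h1 / (h1 + h2) else 2
        out.append((t, cause))
    return out

data = simulate_competing(0.3, 0.1, 10000)
frac1 = sum(1 for _, c in data if c == 1) / len(data)
mean_t = sum(t for t, _ in data) / len(data)
print(round(frac1, 2), round(mean_t, 2))  # near 0.75 and 2.5
```

Time-dependent hazards replace the closed-form draw with inversion or thinning of the cumulative cause-specific hazards, which is where the flexibility described in the abstract comes in.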
75.
《Journal of Statistical Computation and Simulation》2012,82(12):2362-2378
The class of bivariate copulas that are invariant under truncation with respect to one variable is considered. A simulation algorithm for the members of the class and a novel construction method are presented. Moreover, inspired by a stochastic interpretation of the members of this class, a procedure is suggested for checking whether the dependence structure of a given data set is truncation invariant. The overall performance of the procedure is illustrated on both simulated and real data.
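The best-known member of the truncation-invariant class is the Clayton copula, which can be simulated by conditional inversion; the sketch below does that and checks the sample against the theoretical Kendall's tau, theta/(theta+2). This is a standard simulation of one reference family, not the paper's algorithm for the whole class or its truncation-invariance test.

```python
import random

random.seed(3)

def clayton_pair(theta):
    """Draw (U, V) from a Clayton copula via conditional inversion:
    V = (U^(-theta) * (W^(-theta/(theta+1)) - 1) + 1)^(-1/theta)."""
    u = random.random()
    w = random.random()
    v = (u ** (-theta) * (w ** (-theta / (theta + 1)) - 1) + 1) ** (-1 / theta)
    return u, v

sample = [clayton_pair(2.0) for _ in range(800)]

# Empirical Kendall's tau: proportion of concordant minus discordant pairs.
n = len(sample)
conc = 0
for i in range(n):
    ui, vi = sample[i]
    for j in range(i + 1, n):
        uj, vj = sample[j]
        conc += 1 if (ui - uj) * (vi - vj) > 0 else -1
tau = 2 * conc / (n * (n - 1))
print(round(tau, 2))  # near theta/(theta+2) = 0.5 for theta = 2
```

A truncation-invariance check in this spirit compares the dependence measured on the full sample with the one measured after discarding observations below a threshold in one variable.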
76.
In this article, a new two-phase algorithm for rather expensive simulation problems is presented. In the first phase, as in a model-based algorithm, the simulation output is used directly in the optimization stage. In the second phase, the simulation model is replaced by a valid metamodel. In addition, a new optimization algorithm is presented. To evaluate the performance of the proposed algorithm, it is applied to the (s,S) inventory problem as well as to five test functions. Numerical results show that the proposed algorithm reaches better solutions in less computational time than the corresponding metamodel-based algorithm.
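The two-phase structure can be illustrated on a one-dimensional toy: phase 1 averages replicated simulation output on a grid, and phase 2 replaces the simulator by a local quadratic metamodel whose minimizer is found analytically. The noisy quadratic cost function stands in for an expensive simulation; it is not the (s,S) inventory model from the article.

```python
import random

random.seed(5)

def sim_cost(s):
    """Hypothetical noisy simulation: true expected cost (s-36)^2/10 + 50."""
    return (s - 36) ** 2 / 10 + 50 + random.gauss(0, 1.0)

# Phase 1: use simulation output directly -- average 30 replications
# at each grid point and keep the incumbent best.
grid = range(20, 61, 5)
est = {s: sum(sim_cost(s) for _ in range(30)) / 30 for s in grid}
best = min(est, key=est.get)

# Phase 2: replace the simulation by a quadratic metamodel through the
# three grid points around the incumbent, then take its vertex analytically.
x1 = best
y0, y1, y2 = est[x1 - 5], est[x1], est[x1 + 5]
# vertex of the parabola through three equally spaced points (spacing 5)
s_star = x1 + 5 * (y0 - y2) / (2 * (y0 - 2 * y1 + y2))
print(best, round(s_star, 1))  # metamodel refines the grid optimum toward 36
```

The point of the second phase is that once the metamodel is trusted, each candidate evaluation is free, so the optimum can be located between grid points without further simulation runs.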
77.
Lixin Meng, Journal of Statistical Computation and Simulation, 2017, 87(1): 88-99
Ordinary differential equations (ODEs) are commonly used to model dynamic processes in applied sciences such as biology, engineering, and physics, among many other areas. In these models the parameters are usually unknown, and thus they are often specified artificially or empirically. A feasible alternative is to estimate the parameters from observed data. In this study, we propose a Bayesian penalized B-spline approach for estimating the parameters and initial values of ODEs used in epidemiology. We evaluated the efficiency of the proposed method in simulations, using a Markov chain Monte Carlo algorithm for the Kermack–McKendrick model. The proposed approach is also illustrated with a real application to the transmission dynamics of hepatitis C virus in mainland China.
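The estimation target can be made concrete with the Kermack–McKendrick (SIR) model: integrate the ODE numerically and search for the transmission rate that best reproduces observed infective counts. The sketch uses Euler integration and a grid search over beta as a stand-in for the paper's Bayesian penalized B-spline/MCMC machinery; the "observed" trajectory, gamma, and initial values are assumptions.

```python
def sir(beta, gamma, s0, i0, days, dt=0.1):
    """Euler integration of the SIR equations; returns daily infective fractions."""
    s, i = s0, i0
    traj = [i]
    for _ in range(days):
        for _ in range(int(1 / dt)):
            new_inf = beta * s * i * dt   # S -> I transitions
            new_rec = gamma * i * dt      # I -> R transitions
            s -= new_inf
            i += new_inf - new_rec
        traj.append(i)
    return traj

# Synthetic 'observed' infective counts generated with beta = 0.5
# (gamma assumed known at 0.2).
obs = sir(0.5, 0.2, 0.99, 0.01, 30)

# Grid search over beta minimizing squared error -- a crude stand-in for
# the posterior exploration an MCMC sampler would perform.
best_beta = min(
    (b / 100 for b in range(20, 100)),
    key=lambda b: sum((x - y) ** 2 for x, y in zip(sir(b, 0.2, 0.99, 0.01, 30), obs)),
)
print(best_beta)  # recovers 0.5 on this noiseless toy
```

With noisy data and unknown initial values, the least-squares surface flattens, which is why the paper couples the ODE solution with a penalized B-spline representation and full posterior sampling.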
78.
Small area statistics obtained from sample survey data provide a critical source of information used to study health, economic, and sociological trends. However, most large-scale sample surveys are not designed for the purpose of producing small area statistics. Moreover, data disseminators are prevented from releasing public-use microdata for small geographic areas for disclosure reasons, thus limiting the utility of the data they collect. This research evaluates a synthetic data method, intended for data disseminators, for releasing public-use microdata for small geographic areas based on complex sample survey data. The method replaces all observed survey values with synthetic (or imputed) values generated from a hierarchical Bayesian model that explicitly accounts for complex sample design features, including stratification, clustering, and sampling weights. The method is applied to restricted microdata from the National Health Interview Survey, and synthetic data are generated for both sampled and non-sampled small areas. The analytic validity of the resulting small area inferences is assessed by direct comparison with the actual data, a simulation study, and a cross-validation study.
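The release mechanism can be sketched in miniature: fit a model to the confidential values, then publish draws from the fitted model instead of the observations, and check that published summaries still track the real ones. The per-stratum normal model below is a deliberately crude stand-in for the paper's hierarchical Bayesian synthesizer, and the survey data are simulated.

```python
import random
from statistics import mean, stdev

random.seed(11)

# Hypothetical survey microdata: (stratum, value) pairs, 200 per stratum.
data = [(s, random.gauss(50 + 10 * s, 5)) for s in range(4) for _ in range(200)]

# Fit a simple per-stratum model and release synthetic values drawn from it
# in place of the observed ones.
synthetic = []
for s in range(4):
    vals = [v for st, v in data if st == s]
    mu, sd = mean(vals), stdev(vals)
    synthetic += [(s, random.gauss(mu, sd)) for _ in range(len(vals))]

# Analytic-validity check: stratum means agree between real and synthetic data.
for s in range(4):
    real_m = mean(v for st, v in data if st == s)
    syn_m = mean(v for st, v in synthetic if st == s)
    print(s, round(real_m, 1), round(syn_m, 1))
```

The hierarchical model in the paper plays the same role but borrows strength across areas and carries the design features (strata, clusters, weights), which is what makes synthesis for non-sampled small areas possible.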
79.
Yi Zhang, Communications in Statistics – Theory and Methods, 2018, 47(2): 415-426
In this article, we propose a nonparametric mixture of strictly monotone regression models. For implementation, a two-step procedure is derived. We further establish the asymptotic normality of the resulting estimator and demonstrate its good performance through numerical examples.
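The monotone-regression building block behind such models is usually fitted by the pool-adjacent-violators algorithm (PAVA), which computes the least-squares non-decreasing fit by merging adjacent violating blocks. The sketch below shows PAVA on its own; the paper's mixture and two-step procedure are not reproduced here.

```python
def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    blocks = []  # each block holds [sum, count]; its mean is the fitted level
    for v in y:
        blocks.append([v, 1])
        # merge backwards while adjacent block means violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit += [s / c] * c
    return fit

print(pava([1, 3, 2, 4, 3, 5]))  # -> [1, 2.5, 2.5, 3.5, 3.5, 5]
```

In a mixture setting, a step like this would be run within each component on responsibility-weighted data, alternating with a reassignment step.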
80.
Fengkai Yang, Communications in Statistics – Simulation and Computation, 2017, 46(8): 5861-5878
In this article, a non-iterative posterior sampling algorithm for the linear quantile regression model based on the asymmetric Laplace distribution is proposed. The algorithm combines the inverse Bayes formulae, sampling/importance resampling, and the expectation-maximization algorithm to obtain samples that are approximately independently and identically distributed from the observed posterior distribution, which eliminates the convergence problems of iterative Gibbs sampling and overcomes the difficulty of evaluating standard errors in the EM algorithm. Numerical results from simulations and an application to the classical Engel data show that the non-iterative sampling algorithm is more effective than Gibbs sampling and the EM algorithm.
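The sampling/importance-resampling (SIR) component is the part that makes the algorithm non-iterative: draw from a tractable proposal, weight by target over proposal, and resample. The sketch below applies SIR to a toy univariate posterior, not to the quantile regression model or the inverse Bayes formulae of the article; the target and proposal densities are assumptions.

```python
import math
import random

random.seed(9)

def sir_sample(logpost, proposal_draw, proposal_logpdf, m=20000, k=2000):
    """Sampling/importance resampling: draw m proposals, weight by
    target/proposal, then resample k draws with those weights."""
    draws = [proposal_draw() for _ in range(m)]
    logw = [logpost(x) - proposal_logpdf(x) for x in draws]
    mx = max(logw)  # stabilize before exponentiating
    w = [math.exp(lw - mx) for lw in logw]
    return random.choices(draws, weights=w, k=k)

# Toy target: N(1, 0.5^2) 'posterior'; wide N(0, 2^2) proposal.
post = sir_sample(
    logpost=lambda x: -0.5 * ((x - 1.0) / 0.5) ** 2,
    proposal_draw=lambda: random.gauss(0, 2),
    proposal_logpdf=lambda x: -0.5 * (x / 2.0) ** 2,
)
est = sum(post) / len(post)
print(round(est, 2))  # near the target mean 1.0
```

Because the resampled draws are (approximately) independent, posterior standard deviations come directly from the sample, with no burn-in or convergence diagnostics, which is the practical advantage the abstract claims over Gibbs sampling and EM.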