A total of 350 results were retrieved; items 21–30 are shown below.
21.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some loss of power; skeptical predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the trade-off between power and adaptation in adaptive randomization.
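To make the adaptation mechanism concrete, here is a minimal sketch of posterior-based response-adaptive randomization for a two-group binary trial. It assumes Beta(1, 1) priors and a Thall–Wathen-style tempering exponent c; the function names, the tuning constant, and the toy response rates are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def posterior_prob_superior(sA, nA, sB, nB, ndraw=10_000, rng=None):
    """Monte Carlo estimate of Pr(pA > pB | data) under Beta(1, 1) priors."""
    rng = rng if rng is not None else np.random.default_rng(0)
    pA = rng.beta(1 + sA, 1 + nA - sA, ndraw)
    pB = rng.beta(1 + sB, 1 + nB - sB, ndraw)
    return (pA > pB).mean()

def allocation_prob(prob, c=0.5):
    """Tempered allocation probability for arm A (c = 0 gives 1:1)."""
    return prob ** c / (prob ** c + (1 - prob) ** c)

# toy trial loop with assumed true response rates
rng = np.random.default_rng(1)
true_p = {"A": 0.45, "B": 0.30}
s = {"A": 0, "B": 0}   # successes per arm
n = {"A": 0, "B": 0}   # allocations per arm
for _ in range(200):
    prob = posterior_prob_superior(s["A"], n["A"], s["B"], n["B"], rng=rng)
    arm = "A" if rng.random() < allocation_prob(prob) else "B"
    n[arm] += 1
    s[arm] += int(rng.random() < true_p[arm])
print("allocation:", n, "responses:", s)
```

Larger c makes the allocation track the posterior more aggressively, which is exactly the power-versus-adaptation trade-off the abstract describes.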
22.
23.
Abstract.  This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically, and its practical implementation has several major advantages over the corresponding methodology for the proportional hazards model. One complication compared with the proportional model, however, is that there is no simple likelihood to work with. Here we study a least squares criterion with desirable properties and show how this criterion can be interpreted as a prediction error. Given this criterion, we define ridge and Lasso estimators as well as an adaptive Lasso, and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the case where p is large relative to the number of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare the Dantzig selector and the adaptive Lasso for a moderate to small number of covariates. The methods are applied to a breast cancer data set with gene expression recordings and to the primary biliary cirrhosis clinical data.
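As a point of reference for the penalized estimators discussed above, the sketch below implements a generic adaptive Lasso for a plain linear model via covariate rescaling. It is for intuition only: the paper's estimator replaces the squared-error loss with its least squares criterion for the additive hazards model, which is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Adaptive Lasso via rescaling: weight each covariate by
    |initial OLS estimate|^gamma, run an ordinary Lasso on the rescaled
    design, then map the coefficients back to the original scale."""
    beta_init = LinearRegression().fit(X, y).coef_
    w = np.abs(beta_init) ** gamma + 1e-8   # avoid exact zeros
    fit = Lasso(alpha=alpha).fit(X * w, y)
    return fit.coef_ * w

# sparse toy example: only the first three coefficients are nonzero
rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(n)
print(np.round(adaptive_lasso(X, y), 2))
```

The data-dependent weights are what deliver the oracle property mentioned in the abstract: coefficients with large initial estimates are penalized lightly, small ones heavily.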
24.
通过文献分析和定量分析,认为在小城镇的建设进程中,应以产业发展为支撑,尤其应注重产业的聚集问题研究。应用复杂适应系统理论基于StarLogo平台对西部小城镇的产业集聚进行建模与仿真,通过对产业集聚障碍成因的剖析明晰西部小城镇建设的进程和路径,并提出相应的政策建议。  相似文献   
25.
Typically, parametric approaches to spatial problems require restrictive assumptions. On the other hand, in a wide variety of practical situations, nonparametric bivariate smoothing techniques have been shown to be successfully applicable for estimating small- or large-scale regularity factors, or even the signal content of spatial data taken as a whole. We propose a weighted local polynomial regression smoother suitable for fitting spatial data. To account for spatial variability, we both insert a spatial contiguity index into the standard formulation and construct a spatially adaptive bandwidth selection rule. Our bandwidth selector depends on Geary's local indicator of spatial association. As an illustrative example, we provide a brief Monte Carlo case study on equally spaced data in which the performance of our smoother is compared with that of the standard polynomial regression procedure. This note, though the result of a close collaboration, was elaborated as follows: paragraphs 1 and 2 by T. Sclocco and the remainder by M. Di Marzio. The authors are grateful to the referees for constructive comments and suggestions.
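The following sketch conveys the general flavour of a local polynomial smoother with a pointwise adaptive bandwidth. The bandwidth rule here is a crude stand-in based on local response variability; the paper's rule is driven by Geary's local indicator of spatial association and a spatial contiguity index, neither of which is reproduced.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0 with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                       # intercept = fitted value at x0

def adaptive_bandwidth(x0, x, y, h0=0.3, k=15):
    """Crude pointwise rule: shrink h0 where the k nearest responses vary a
    lot (a stand-in for the Geary-based indicator used in the paper)."""
    idx = np.argsort(np.abs(x - x0))[:k]
    return h0 / (1.0 + np.std(y[idx]))

# equally spaced toy data, as in the paper's Monte Carlo case study
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(6 * x) + 0.2 * rng.standard_normal(x.size)
fit = np.array([local_linear(t, x, y, adaptive_bandwidth(t, x, y)) for t in x])
```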
26.
We consider the situation where one wants to maximise a function f(θ, x) with respect to x, with θ unknown and estimated from observations y_k. This may correspond to the case of a regression model, where one observes y_k = f(θ, x_k) + ε_k, with ε_k some random error, or to the Bernoulli case where y_k ∈ {0, 1}, with Pr[y_k = 1 | θ, x_k] = f(θ, x_k). Special attention is given to sequences in which the next design point x_{k+1} maximises f(θ̂_k, x) + d_k(x), with θ̂_k an estimate of θ obtained from (x_1, y_1), …, (x_k, y_k) and d_k(x) a penalty for poor estimation. Approximately optimal rules are suggested in the linear regression case with a finite horizon, where one wants to maximise ∑_{i=1}^{N} w_i f(θ, x_i) with {w_i} a weighting sequence. Various examples are presented, with a comparison with a Pólya urn design and an up-and-down method for a binary response problem.
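A toy instance of such a penalized sequential rule might look as follows. The quadratic response f(θ, x) = θ_1 x − θ_2 x², the least-squares estimation of θ, and the distance-based exploration penalty decaying as 1/√k are all illustrative assumptions, not the paper's specification.

```python
import numpy as np

def f(theta, x):
    """Assumed response surface, maximised near x = theta[0] / (2 theta[1])."""
    return theta[0] * x - theta[1] * x ** 2

rng = np.random.default_rng(0)
theta_true = np.array([1.0, 0.8])
grid = np.linspace(0.0, 1.5, 151)
X, Y = [], []
for k in range(1, 51):
    if len(X) < 2:
        x_next = grid[rng.integers(len(grid))]     # seed with random points
    else:
        # least-squares estimate of theta from the data so far
        D = np.column_stack([X, -np.square(X)])
        theta_hat, *_ = np.linalg.lstsq(D, np.asarray(Y), rcond=None)
        # d_k(x): reward points far from past ones, with decaying weight
        dist = np.array([np.min(np.abs(g - np.asarray(X))) for g in grid])
        x_next = grid[np.argmax(f(theta_hat, grid) + dist / np.sqrt(k))]
    X.append(x_next)
    Y.append(f(theta_true, x_next) + 0.05 * rng.standard_normal())
```

The penalty forces early exploration so that θ̂_k improves, after which the rule concentrates near the estimated maximiser.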
27.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality machine learning methods (e.g., boosted trees, deep neural networks, random forests, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
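As one concrete instance of the orthogonal (doubly robust) moment idea with cross-fitting, here is a sketch of an AIPW estimator of the ATE using random forests for the nuisance functions. It covers only the simplest exogenous-treatment case, not the LATE/LQTE or function-valued machinery of the paper, and the forest choice and clipping threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def dr_ate(y, d, X, n_splits=5, seed=0):
    """Cross-fitted AIPW (doubly robust) estimate of the ATE for binary d,
    returning the point estimate and its standard error."""
    psi = np.zeros_like(y, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        treated, control = d[train] == 1, d[train] == 0
        m1 = RandomForestRegressor(random_state=seed).fit(X[train][treated], y[train][treated])
        m0 = RandomForestRegressor(random_state=seed).fit(X[train][control], y[train][control])
        ps = RandomForestClassifier(random_state=seed).fit(X[train], d[train])
        p = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
        # orthogonal moment: regression adjustment plus IPW correction terms
        psi[test] = (mu1 - mu0
                     + d[test] * (y[test] - mu1) / p
                     - (1 - d[test]) * (y[test] - mu0) / (1 - p))
    return psi.mean(), psi.std() / np.sqrt(len(y))

# toy data with confounded treatment and true ATE = 1
rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 5))
d = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
y = 1.0 * d + X[:, 1] + rng.standard_normal(n)
print(dr_ate(y, d, X))
```

The orthogonality of the score is what makes the estimate insensitive to first-order errors in the machine-learned nuisance functions, which is the key to the honest inference emphasized above.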
28.
A tutorial on adaptive MCMC
We review adaptive Markov chain Monte Carlo (MCMC) algorithms as a means to optimise their performance. Using simple toy examples, we review their theoretical underpinnings and, in particular, show why adaptive MCMC algorithms may fail when some fundamental properties are not satisfied. This leads to guidelines for the design of correct algorithms. We then review criteria and the useful framework of stochastic approximation, which allows one both to systematically optimise commonly used criteria and to analyse the properties of adaptive MCMC algorithms. We then propose a series of novel adaptive algorithms which prove robust and reliable in practice. These algorithms are applied not only to artificial and high-dimensional scenarios but also to the classic mine disaster dataset inference problem.
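For orientation, a minimal Haario-style adaptive random-walk Metropolis sampler is sketched below: the proposal covariance is the scaled running covariance of the chain. This glosses over the vanishing-adaptation safeguards that the tutorial shows are needed for correctness, and the burn-in length and regularization constant are illustrative choices.

```python
import numpy as np

def adaptive_metropolis(logpi, x0, n_iter=20_000, adapt_after=1_000, eps=1e-6):
    """Adaptive random-walk Metropolis: after a fixed burn-in, propose from
    N(x, (2.38^2/d) * Cov(chain) + eps*I), the classic Haario et al. recipe."""
    d = len(x0)
    chain = np.zeros((n_iter, d))
    x, lp = np.array(x0, dtype=float), logpi(np.asarray(x0, dtype=float))
    rng = np.random.default_rng(0)
    sd = 2.38 ** 2 / d
    for t in range(n_iter):
        if t < adapt_after:
            cov = 0.1 * np.eye(d)                       # fixed initial proposal
        else:
            cov = sd * np.cov(chain[:t].T) + sd * eps * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        lp_prop = logpi(prop)
        if np.log(rng.random()) < lp_prop - lp:         # Metropolis accept step
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# example target: strongly correlated bivariate Gaussian
Sigma_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
chain = adaptive_metropolis(lambda z: -0.5 * z @ Sigma_inv @ z, np.zeros(2))
```

Because the adaptation here never stops, ergodicity relies on conditions such as diminishing adaptation; the tutorial's toy counterexamples show what goes wrong when they fail.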
29.
In the high-dimensional setting, componentwise L2 boosting has been used to construct sparse models that perform well, but it tends to select many ineffective variables. Several sparse boosting methods, such as SparseL2Boosting and Twin Boosting, have been proposed to improve the variable selection of the L2 boosting algorithm. In this article, we propose a new general sparse boosting method (GSBoosting). Relations are established between GSBoosting and other well-known regularized variable selection methods in the orthogonal linear model, such as the adaptive Lasso and hard thresholding. Simulation results show that GSBoosting performs well in both prediction and variable selection.
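For readers unfamiliar with the base procedure, here is a plain componentwise L2 boosting sketch (the baseline being improved upon, not GSBoosting): each step refits the current residuals by simple least squares on the single best covariate and takes a small step of size nu in that direction.

```python
import numpy as np

def l2_boost(X, y, steps=200, nu=0.1):
    """Componentwise L2 boosting with univariate least-squares base learners."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - y.mean()
    for _ in range(steps):
        # univariate least-squares coefficient for every covariate
        coefs = X.T @ resid / (X ** 2).sum(axis=0)
        # pick the covariate whose fit leaves the smallest residual SS
        sse = ((resid[:, None] - X * coefs) ** 2).sum(axis=0)
        j = np.argmin(sse)
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return beta

# sparse toy example: only three truly active covariates
rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.standard_normal(n)
print(np.round(l2_boost(X, y)[:5], 2))
```

Because every step may touch a new covariate, long runs accumulate many small spurious coefficients; this is the over-selection behaviour that sparse variants such as GSBoosting are designed to suppress.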
30.
In clinical trials with binary endpoints, the required sample size depends not only on the specified type I error rate, the desired power, and the treatment effect, but also on the overall event rate, which, however, is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty: nuisance parameters required for sample size calculation are re-estimated during the ongoing trial, and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials where the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure crucially depends on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. It turned out that the unequal-variance asymptotic normal formula outperforms the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design ensures that the desired power is achieved even if the overall rate is mis-specified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurred for the fixed sample size design.
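A minimal sketch of the recalculation step follows, using the standard unpooled ("unequal variance") asymptotic normal sample size formula for two proportions with 1:1 allocation. The blinded re-estimation rule and the numbers in the example are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Unpooled asymptotic normal sample size per group for a two-sided
    test of p1 = p2 with 1:1 allocation."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                       / (p1 - p2) ** 2))

def recalc_from_pilot(events, n_pilot, rr, alpha=0.05, power=0.8):
    """Internal pilot step: re-estimate the overall event rate from pooled
    (blinded) data, keep the assumed relative risk rr fixed, and recompute
    the required sample size per group."""
    pbar = events / n_pilot          # blinded overall rate estimate
    p2 = 2 * pbar / (1 + rr)         # control rate implied by rr and pbar
    return n_per_group(rr * p2, p2, alpha, power)

# planning assumed a higher overall rate; the pilot data suggest ~0.22
print(recalc_from_pilot(events=40, n_pilot=180, rr=1.5))
```

Keeping the re-estimation blinded (pooled over both arms) is what keeps the type I error inflation small, consistent with the findings reported above.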