11.
In the non-conjugate Gibbs sampler, sampling from the full conditional densities requires black-box sampling methods. Recent suggestions include rejection sampling, adaptive rejection sampling, the generalized ratio-of-uniforms method, and the Griddy-Gibbs sampler. This paper describes a general idea based on variate transformations which can be tailored to all of the above methods and increases the efficiency of the Gibbs sampler. Moreover, a simple technique for assessing convergence is suggested, and illustrative examples are presented.
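The black-box rejection step inside a non-conjugate Gibbs update can be sketched as follows; this is a minimal illustration with hypothetical function names and a toy full conditional, not the paper's transformation method.

```python
import math
import random

def rejection_sample(log_f, proposal, log_envelope, n_tries=10_000):
    """Draw one variate from a density proportional to exp(log_f(x)),
    accepting a proposed x with probability exp(log_f(x) - log_envelope(x))."""
    for _ in range(n_tries):
        x = proposal()
        if math.log(random.random()) < log_f(x) - log_envelope(x):
            return x
    raise RuntimeError("envelope too loose; no proposal accepted")

random.seed(0)
# Toy full conditional proportional to x * exp(-x) on (0, 5); its mode is
# at x = 1, where the (unnormalized) density height is exp(-1).
log_f = lambda x: math.log(x) - x
draw = rejection_sample(log_f,
                        lambda: random.uniform(1e-9, 5.0),
                        lambda x: -1.0)  # constant bound log(e^{-1})
```

The constant envelope is valid here because log(x) - x <= -1 for all x > 0; a variate transformation would aim to tighten this envelope and raise the acceptance rate.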
12.
This article employs Agent-Based Computational Economics (ACE) to investigate whether, and under what conditions, trust is viable in markets. The emergence and breakdown of trust is modeled in a context of multiple buyers and suppliers. Agents develop trust in a partner as a function of observed loyalty. They select partners on the basis of their trust in the partner and potential profit, with adaptive weights. On the basis of realized profits, they adapt the weight they attach to trust relative to profitability, and their own trustworthiness, modeled as a threshold of defection. Trust and loyalty turn out to be viable under fairly general conditions.
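The trust-update and partner-scoring mechanics described above can be sketched schematically; the update rate, weight, and function names below are illustrative assumptions, not the article's calibrated model.

```python
def update_trust(trust, loyal, rate=0.5):
    """Trust moves toward 1 after a loyal interaction, toward 0 after defection."""
    target = 1.0 if loyal else 0.0
    return trust + rate * (target - trust)

def partner_score(trust, profit, alpha):
    """Partner preference combines trust and potential profit; the weight
    alpha is what agents adapt based on realized profits."""
    return alpha * trust + (1.0 - alpha) * profit

# Two loyal interactions followed by a defection erode accumulated trust:
t = 0.5
for loyal in [True, True, False]:
    t = update_trust(t, loyal)
```

After the two loyal rounds trust climbs to 0.875; a single defection cuts it to 0.4375, so breakdowns are felt faster than trust is rebuilt.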
13.
We apply the Abramson principle to define adaptive kernel estimators for the intensity function of a spatial point process. We derive asymptotic expansions for the bias and variance under the regime that n independent copies of a simple point process in Euclidean space are superposed. The method is illustrated by means of a simple example and applied to tornado data.
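A minimal one-dimensional sketch of the Abramson principle (local bandwidth inversely proportional to the square root of a pilot intensity estimate) is given below; the bandwidth choice and the omission of edge correction are simplifying assumptions.

```python
import math

def gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def fixed_intensity(points, x, h):
    # Fixed-bandwidth kernel intensity estimate (edge effects ignored)
    return sum(gauss((x - p) / h) / h for p in points)

def abramson_intensity(points, x, h0):
    # Pilot estimate at each data point, floored away from zero
    pilot = [max(fixed_intensity(points, p, h0), 1e-12) for p in points]
    gbar = math.exp(sum(math.log(v) for v in pilot) / len(pilot))
    total = 0.0
    for lam, p in zip(pilot, points):
        h = h0 * math.sqrt(gbar / lam)  # Abramson square-root bandwidth
        total += gauss((x - p) / h) / h
    return total

pts = [0.1, 0.2, 0.25, 0.9]
val = abramson_intensity(pts, 0.2, 0.3)
```

Points in dense regions receive smaller bandwidths (sharper local detail), while isolated points are smoothed more heavily.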
14.
In this paper, semiparametric methods are applied to estimate multivariate volatility functions, using a residual approach as in [J. Fan and Q. Yao, Efficient estimation of conditional variance functions in stochastic regression, Biometrika 85 (1998), pp. 645–660; F.A. Ziegelmann, Nonparametric estimation of volatility functions: The local exponential estimator, Econometric Theory 18 (2002), pp. 985–991; F.A. Ziegelmann, A local linear least-absolute-deviations estimator of volatility, Comm. Statist. Simulation Comput. 37 (2008), pp. 1543–1564], among others. Our main goal here is two-fold: (1) describe and implement a number of semiparametric models, such as additive, single-index and (adaptive) functional-coefficient, in volatility estimation, all motivated as alternatives to deal with the curse of dimensionality present in fully nonparametric models; and (2) propose the use of a variation of the traditional cross-validation method to deal with model choice in the class of adaptive functional-coefficient models, choosing simultaneously the bandwidth, the number of covariates in the model and also the single-index smoothing variable. The modified cross-validation algorithm is able to tackle the computational burden caused by the model complexity, providing an important tool in semiparametric volatility estimation. We briefly discuss model identifiability when estimating volatility as well as nonnegativity of the resulting estimators. Furthermore, Monte Carlo simulations for several underlying generating models are implemented and applications to real data are provided.
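The residual approach underlying this line of work (estimate the conditional mean nonparametrically, then smooth the squared residuals) can be sketched in one dimension; the Nadaraya-Watson smoother and the fixed bandwidth below are simplifying assumptions, not the paper's semiparametric estimators.

```python
import math

def nw_smooth(xs, ys, x, h):
    """Nadaraya-Watson local average with a Gaussian kernel."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def residual_volatility(xs, ys, x, h):
    # Step 1: nonparametric conditional mean at each design point.
    # Step 2: smooth the squared residuals to estimate the variance function.
    r2 = [(yi - nw_smooth(xs, ys, xi, h)) ** 2 for xi, yi in zip(xs, ys)]
    return nw_smooth(xs, r2, x, h)

xs = [0.1 * i for i in range(20)]
ys = [math.sin(x) + 0.1 * (-1) ** i for i, x in enumerate(xs)]
vol = residual_volatility(xs, ys, 1.0, 0.2)
```

Because the second smoothing pass averages nonnegative quantities, this construction yields a nonnegative volatility estimate by design, one of the identifiability and nonnegativity points the paper discusses.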
15.
In this paper, we propose a new full iteration estimation method for quantile regression (QR) of the single-index model (SIM). The asymptotic properties of the proposed estimator are derived. Furthermore, we propose a variable selection procedure for the QR of SIM by combining the estimation method with the adaptive LASSO penalized method to obtain a sparse estimate of the index parameter. The oracle properties of the variable selection method are established. Simulations with various non-normal errors are conducted to demonstrate the finite sample performance of the estimation method and the variable selection procedure. Finally, we illustrate the proposed method by analyzing a real data set.
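The sparsity mechanism of the adaptive LASSO can be illustrated with a single soft-thresholding pass using weights inversely proportional to initial coefficient magnitudes; this is a schematic one-step device, not the paper's full iteration estimator for the quantile-regression single-index model.

```python
import math

def adaptive_lasso_step(beta_init, lam):
    """One soft-thresholding pass with adaptive weights 1/|beta_init|:
    coefficients with small initial estimates face heavier penalties
    and are driven to exactly zero."""
    out = []
    for b in beta_init:
        penalty = lam / max(abs(b), 1e-12)
        out.append(math.copysign(max(abs(b) - penalty, 0.0), b))
    return out

beta = adaptive_lasso_step([2.0, -0.05, 0.8], lam=0.1)
```

The large coefficient 2.0 is barely shrunk (penalty 0.05), while the small coefficient -0.05 faces a penalty of 2.0 and is zeroed out, which is the source of the oracle property.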
16.
Starting from basic Phillips curve theory, this paper proposes a new adaptive expectations model and, on that basis, builds an expectations-augmented Phillips curve equation, which is then used to estimate China's natural rate of unemployment in recent years. The results confirm the validity of the Phillips curve for China and describe the influence of the expectations component on the actual level of inflation. Finally, drawing on the empirical results, the paper offers suggestions for addressing China's inflation and unemployment problems, such as keeping monetary policy consistent, strengthening information disclosure, and tackling structural unemployment.
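The textbook mechanics behind an expectations-augmented Phillips curve can be sketched as follows; the adjustment speed, slope, and numbers are illustrative assumptions, not the paper's estimates for China.

```python
def adaptive_expectation(pi_e_prev, pi_prev, lam=0.4):
    """Adaptive expectations: pi_e(t) = pi_e(t-1) + lam * (pi(t-1) - pi_e(t-1)),
    i.e. expectations are revised toward last period's actual inflation."""
    return pi_e_prev + lam * (pi_prev - pi_e_prev)

def phillips_inflation(pi_e, u, u_natural, slope=0.5):
    """Expectations-augmented Phillips curve: pi = pi_e - slope * (u - u*)."""
    return pi_e - slope * (u - u_natural)

# Expectations at 2% are revised toward an observed 3% ...
pi_e = adaptive_expectation(2.0, 3.0)
# ... and unemployment one point above the natural rate pulls inflation down.
pi = phillips_inflation(pi_e, u=5.0, u_natural=4.0)
```

With these numbers the expectation rises to 2.4% and realized inflation is 1.9%; when unemployment equals the natural rate, inflation equals expected inflation, which is what lets the natural rate be backed out from an estimated equation.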
17.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the scores dividing the patients into biomarker‐positive and biomarker‐negative subgroups. However, this may bias patient subgroup identification in two respects: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; and (2) the median score may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we utilize a likelihood‐based change‐point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while the overall adaptive power to detect treatment effects sacrifices approximately 4.5% for the simulation designs considered by performing the LRT; furthermore, the change‐point algorithm considerably outperforms the median cutoff when the subgroup sizes differ substantially.
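The advantage of a change-point cutoff over the median can be sketched with a simple scan over candidate splits; the least-squares criterion below is a stand-in for the paper's likelihood-based algorithm, and the data are invented for illustration.

```python
def change_point_cutoff(scores):
    """Scan candidate splits of the sorted scores and pick the cutoff
    minimizing the within-group sums of squares (a least-squares stand-in
    for a likelihood-based change-point criterion)."""
    xs = sorted(scores)
    best_cut, best_sse = None, float("inf")
    for k in range(1, len(xs)):
        left, right = xs[:k], xs[k:]
        m_l, m_r = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - m_l) ** 2 for v in left)
               + sum((v - m_r) ** 2 for v in right))
        if sse < best_sse:
            best_cut, best_sse = (xs[k - 1] + xs[k]) / 2.0, sse
    return best_cut

# Three low scorers and four high scorers: the median would split 3/4
# through the high cluster, while the scan finds the gap between clusters.
cut = change_point_cutoff([0.1, 0.2, 0.15, 1.1, 1.2, 1.05, 1.15])
```

When the two subgroups are of unequal size, the scanned cutoff lands in the gap between the clusters rather than at the median, which here falls inside the larger cluster.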
18.
Response‐adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better‐performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is changes in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their slow accrual. We compute the type I error inflation as a function of the magnitude of the time trend to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts, together with their performance on other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, distinguishing between the two‐armed and multi‐armed cases. We further propose a RAR design for multi‐armed clinical trials that is computationally efficient and robust to the time trends considered.
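The basic skewing mechanism of RAR can be sketched with a posterior-mean-based allocation rule; the Beta(1,1) prior and the stabilizing exponent are illustrative assumptions and this is not one of the specific RAR rules compared in the paper.

```python
def rar_allocation(successes, trials, stabilize=0.5):
    """Skew allocation probabilities toward the better-performing arm.
    Each arm's Beta(1,1) posterior mean is raised to a stabilizing power
    (values < 1 temper the skew) and then normalized."""
    means = [(s + 1.0) / (n + 2.0) for s, n in zip(successes, trials)]
    powered = [m ** stabilize for m in means]
    total = sum(powered)
    return [p / total for p in powered]

# Arm 1: 8/10 responses; arm 2: 3/10 responses.
probs = rar_allocation([8, 3], [10, 10])
```

With these counts the better arm receives allocation probability 0.6 rather than 0.5; a time trend is dangerous precisely because such skewing confounds treatment allocation with the period in which patients were recruited.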
19.
Covariance matrices play an important role in many multivariate techniques, and hence good covariance estimation is crucial in this kind of analysis. In many applications a sparse covariance matrix is expected, either because of the nature of the data or for ease of interpretation. Hard thresholding, soft thresholding, and generalized thresholding have been developed to this end. However, these estimators do not always yield well-conditioned covariance estimates. To obtain sparse and well-conditioned estimates, we propose doubly shrinkage estimators: shrinking small covariances towards zero and then shrinking the covariance matrix towards a diagonal matrix. Additionally, a richness index is defined to evaluate how rich a covariance matrix is. According to our simulations, the richness index serves as a good indicator for choosing the relevant covariance estimator.
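The two shrinkage stages can be sketched directly: soft-threshold the off-diagonal entries, then pull them further toward a diagonal matrix. The threshold and shrinkage weight below are illustrative assumptions, not the paper's tuned values.

```python
import math

def doubly_shrink(cov, thresh, alpha):
    """Stage 1: soft-threshold off-diagonal covariances toward zero.
    Stage 2: shrink the result toward a diagonal matrix by scaling the
    surviving off-diagonals by (1 - alpha). Diagonals are left intact."""
    p = len(cov)
    out = [row[:] for row in cov]
    for i in range(p):
        for j in range(p):
            if i != j:
                s = math.copysign(max(abs(out[i][j]) - thresh, 0.0), out[i][j])
                out[i][j] = (1.0 - alpha) * s
    return out

S = [[2.0, 0.05, 0.8],
     [0.05, 1.0, 0.0],
     [0.8, 0.0, 1.5]]
R = doubly_shrink(S, thresh=0.1, alpha=0.2)
```

The weak covariance 0.05 is thresholded to exactly zero (sparsity), while the strong covariance 0.8 survives but is shrunk to 0.56; the second stage is what improves conditioning, since scaling off-diagonals toward zero pushes the matrix toward its well-conditioned diagonal.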
20.
Clinical phase II trials in oncology are conducted to determine whether the activity of a new anticancer treatment is promising enough to merit further investigation. Two‐stage designs are commonly used for this situation to allow for early termination. Designs proposed in the literature so far have the common drawback that the sample sizes for the two stages have to be specified in the protocol and have to be adhered to strictly during the course of the trial. As a consequence, designs that allow a higher extent of flexibility are desirable. In this article, we propose a new adaptive method that allows an arbitrary modification of the sample size of the second stage using the results of the interim analysis or external information while controlling the type I error rate. If the sample size is not changed during the trial, the proposed design shows very similar characteristics to the optimal two‐stage design proposed by Chang et al. (Biometrics 1987; 43:865–874). However, the new design allows the use of mid‐course information for the planning of the second stage, thus meeting practical requirements when performing clinical phase II trials in oncology. Copyright © 2012 John Wiley & Sons, Ltd.
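The decision structure of a classical two-stage phase II design can be sketched as follows; the boundaries are hypothetical placeholders, and this shows only the rigid design that the article's adaptive method relaxes, not the adaptive method itself.

```python
def two_stage_decision(responses1, r1, responses_total, r):
    """Schematic two-stage rule with hypothetical boundaries r1 and r:
    stop for futility if stage-1 responses do not exceed r1; otherwise
    declare the treatment promising if total responses exceed r."""
    if responses1 <= r1:
        return "stop for futility"
    return "promising" if responses_total > r else "not promising"

# 4 responses in stage 1 clears the futility boundary r1 = 1;
# 10 total responses exceeds the final boundary r = 8.
decision = two_stage_decision(responses1=4, r1=1, responses_total=10, r=8)
```

In the classical design the stage sizes behind these counts are fixed in the protocol; the article's contribution is to let the second-stage sample size be modified at interim while still controlling the type I error rate.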