11.
The theory of translation as adaptation and selection brings the Darwinian notions of "natural selection" and "survival of the fittest" from evolutionary biology into translation studies, foregrounding the translator's subjectivity (the translator's selective adaptation and adaptive selection) and opening a new path for translation research. Guided by this theory, and taking Lu Xun's early translations of foreign science fiction as a case study, this paper argues that in the translation process translators should fully exercise their initiative, select and adapt to a particular translational eco-environment, choose suitable source texts accordingly, and carry out "multi-dimensional adaptation and adaptive selective transformation" so as to raise the "degree of holistic adaptation and selection" of the "optimal translation".
12.
With the rapid growth of China's economy and intensifying external competition, new conditions such as organizational flattening, faster internal and external change, and career plateaus have emerged, creating new difficulties for middle managers and posing new challenges to their adaptability. Taking Pulakos' eight-dimension model of adaptive performance as its basis and grounding the analysis in the domestic context, this paper applies an improved DEMATEL method to quantitatively analyze the factors that influence adaptive performance, identifies the key factors affecting middle managers' adaptive performance, and ranks and analyzes the importance of each factor.
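As a rough illustration of the classical DEMATEL computation behind this kind of factor analysis (the paper's improved variant, its actual influence factors, and its expert ratings are not reproduced; the matrix and factor names below are made up), a minimal sketch:

```python
import numpy as np

# Hypothetical 0-4 expert ratings of how strongly factor i influences factor j
A = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [0, 1, 0, 3],
    [2, 1, 1, 0],
], dtype=float)
factors = ["F1", "F2", "F3", "F4"]          # placeholder factor names

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s                                   # normalized direct-influence matrix
T = D @ np.linalg.inv(np.eye(len(A)) - D)   # total-relation matrix T = D(I - D)^-1

r = T.sum(axis=1)                           # influence given by each factor
c = T.sum(axis=0)                           # influence received by each factor
for name, prom, rel in zip(factors, r + c, r - c):
    role = "cause" if rel > 0 else "effect"
    print(f"{name}: prominence={prom:.2f}, relation={rel:+.2f} ({role})")
```

Ranking factors by prominence (r + c) gives an importance ordering of the kind referred to in the abstract, while the sign of the relation (r - c) separates cause factors from effect factors.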
13.
This paper formulates adaptive behavior rules for firms in an R&D network from a risk perspective, builds a risk propagation model for R&D networks based on the SIS model, and uses numerical simulation, varying the model parameters, to explore how risk propagates through the R&D network when adaptive behavior is taken into account. The results show that: (1) the C1 strategy strengthens the hierarchy and community strength of the network and suppresses risk propagation in the R&D network to some extent, whereas under the C2 strategy new links between nodes are formed mainly on the basis of proximity, which easily leads to path dependence and capability traps; (2) firms' adaptive behavior causes fluctuations in community strength, and the fall in average path length together with the rise in the average clustering coefficient demonstrates the effectiveness of the C1 strategy; (3) under the C1 strategy, the edge-breaking probability p has a U-shaped relationship with I*, while under the C2 strategy I* decreases as the edge-breaking probability p grows; (4) under both the C1 and C2 strategies, I* rises as the parameter ζ grows, indicating that the level of organizational dependence is a factor that deserves particular attention when controlling risk propagation in R&D networks. The paper reveals the regularities of risk propagation in R&D networks when adaptive behavior is considered and provides a theoretical basis for the governance of R&D networks in a networked-operation setting.
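A minimal sketch of a baseline SIS spread on a static scale-free network, in the spirit of the risk propagation model described above; the adaptive rewiring strategies C1 and C2, the edge-breaking probability p, and the dependence parameter ζ from the paper are not implemented, and the network size and rates are illustrative assumptions:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(200, 3, seed=1)   # stand-in R&D collaboration network
beta, mu, steps = 0.05, 0.2, 300               # risk-transfer and recovery probabilities

infected = set(rng.choice(G.number_of_nodes(), size=5, replace=False))
history = []
for _ in range(steps):
    new_infected = set(infected)
    for i in infected:
        for j in G.neighbors(i):               # risk spreads to susceptible partners
            if j not in infected and rng.random() < beta:
                new_infected.add(j)
        if rng.random() < mu:                  # affected firm recovers, stays susceptible (SIS)
            new_infected.discard(i)
    infected = new_infected
    history.append(len(infected) / G.number_of_nodes())

print("steady-state infected fraction:", np.mean(history[-50:]))
```

The tail average of the infected fraction plays the role of the steady-state prevalence I* discussed in the results; the paper's experiments additionally rewire edges at each step according to the C1 or C2 rule.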
14.
    
Recently, several new applications of control chart procedures for short production runs have been introduced. Bothe (1989) and Burr (1989) proposed the use of control chart statistics which are obtained by scaling the quality characteristic by target values or process estimates of a location and scale parameter. The performance of these control charts can be significantly affected by the use of incorrect scaling parameters, resulting in either an excessive "false alarm rate" or insensitivity to the detection of moderate shifts in the process. To correct for these deficiencies, Quesenberry (1990, 1991) developed the Q-chart, which is formed from running process estimates of the sample mean and variance. For the case where both the process mean and variance are unknown, the Q-chart statistic is formed from the standard inverse Z-transformation of a t-statistic. Q-charts do not perform correctly, however, in the presence of special cause disturbances at process startup. This has recently been supported by results published by Del Castillo and Montgomery (1992), who recommend the use of an alternative control chart procedure based upon a first-order adaptive Kalman filter model. Consistent with the recommendations of Del Castillo and Montgomery, we propose an alternative short run control chart procedure based upon the second order dynamic linear model (DLM). The control chart is shown to be useful for the early detection of unwanted process trends. Model and control chart parameters are updated sequentially in a Bayesian estimation framework, providing the greatest degree of flexibility in the level of prior information incorporated into the model. The result is a weighted moving average control chart statistic which can be used to provide running estimates of process capability. The average run length performance of the control chart is compared to the optimal performance of the exponentially weighted moving average (EWMA) chart, as reported by Gan (1991). Using a simulation approach, the second order DLM control chart is shown to provide better overall performance than the EWMA for short production run applications.
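A minimal sketch of a local linear trend (second-order polynomial) DLM filtered with the standard Kalman recursions and monitored through standardized one-step-ahead forecast errors; the paper's sequential Bayesian updating of the variances, its weighted moving average chart statistic, and its run-length calibration are not reproduced, and the variances V and W below are illustrative assumptions:

```python
import numpy as np

def dlm_forecast_errors(y, V=1.0, W=np.diag([0.1, 0.01])):
    """Standardized one-step-ahead forecast errors from a local linear trend
    (level + slope) DLM, computed with the usual Kalman filter recursions."""
    G = np.array([[1.0, 1.0], [0.0, 1.0]])   # level-plus-slope state evolution
    F = np.array([1.0, 0.0])                 # we observe the level plus noise
    m, C = np.array([y[0], 0.0]), np.eye(2) * 10.0
    z = []
    for yt in y[1:]:
        a = G @ m                            # prior state mean
        R = G @ C @ G.T + W                  # prior state covariance
        f, Q = F @ a, F @ R @ F + V          # forecast mean and variance
        e = yt - f
        A = R @ F / Q                        # Kalman gain
        m, C = a + A * e, R - np.outer(A, A) * Q
        z.append(e / np.sqrt(Q))
    return np.array(z)

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(10, 1, 15),                               # stable short run
                    10 + 0.8 * np.arange(1, 11) + rng.normal(0, 1, 10)])  # drifting run
z = dlm_forecast_errors(y)
print(np.round(z, 2))   # a sustained run of positive errors after t = 15 flags the trend
```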
15.
    
Covariance matrices play an important role in many multivariate techniques, and hence good covariance estimation is crucial in this kind of analysis. In many applications a sparse covariance matrix is expected, owing to the nature of the data or for ease of interpretation. Hard thresholding, soft thresholding, and generalized thresholding were developed to this end. However, these estimators do not always yield well-conditioned covariance estimates. To obtain estimates that are both sparse and well conditioned, we propose doubly shrinkage estimators: shrinking small covariances towards zero and then shrinking the covariance matrix towards a diagonal matrix. Additionally, a richness index is defined to evaluate how rich a covariance matrix is. According to our simulations, the richness index serves as a good indicator for choosing the relevant covariance estimator.
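A minimal sketch of one plausible "doubly shrunk" covariance estimate, soft-thresholding the off-diagonal entries and then shrinking toward the diagonal; the paper's choice of thresholding rule, its tuning of the two shrinkage levels, and the richness index are not reproduced, and the threshold and alpha below are illustrative assumptions:

```python
import numpy as np

def doubly_shrunk_cov(X, threshold=0.1, alpha=0.2):
    """Soft-threshold off-diagonal sample covariances toward zero, then shrink
    the resulting matrix toward its diagonal to improve conditioning."""
    S = np.cov(X, rowvar=False)
    off = S - np.diag(np.diag(S))
    off = np.sign(off) * np.maximum(np.abs(off) - threshold, 0.0)   # soft thresholding
    S_sparse = np.diag(np.diag(S)) + off
    return (1 - alpha) * S_sparse + alpha * np.diag(np.diag(S))     # shrink toward diagonal

rng = np.random.default_rng(3)
Sigma = np.eye(5)
Sigma[0, 1] = Sigma[1, 0] = 0.6                 # one true off-diagonal dependence
X = rng.multivariate_normal(np.zeros(5), Sigma, size=50)
S_hat = doubly_shrunk_cov(X)
print(np.round(S_hat, 2))
print("condition numbers:", np.linalg.cond(np.cov(X, rowvar=False)), np.linalg.cond(S_hat))
```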
16.
    
Clinical phase II trials in oncology are conducted to determine whether the activity of a new anticancer treatment is promising enough to merit further investigation. Two-stage designs are commonly used for this situation to allow for early termination. Designs proposed in the literature so far have the common drawback that the sample sizes for the two stages have to be specified in the protocol and have to be adhered to strictly during the course of the trial. As a consequence, designs that allow a higher extent of flexibility are desirable. In this article, we propose a new adaptive method that allows an arbitrary modification of the sample size of the second stage using the results of the interim analysis or external information while controlling the type I error rate. If the sample size is not changed during the trial, the proposed design shows very similar characteristics to the optimal two-stage design proposed by Chang et al. (Biometrics 1987; 43:865–874). However, the new design allows the use of mid-course information for the planning of the second stage, thus meeting practical requirements when performing clinical phase II trials in oncology. Copyright © 2012 John Wiley & Sons, Ltd.
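For orientation, a minimal sketch of the operating characteristics of a fixed single-arm two-stage binomial design (early futility stop after stage 1, rejection of the null at the end); the adaptive mid-course sample-size modification and its type I error control, which are the contribution of the article, are not reproduced, and the design parameters below are illustrative assumptions:

```python
from scipy.stats import binom

def two_stage_oc(n1, r1, n2, r, p):
    """Rejection probability and early-stop probability of a two-stage design:
    stop for futility if stage-1 responses <= r1, otherwise declare the
    treatment promising if total responses out of n1 + n2 exceed r."""
    pet = binom.cdf(r1, n1, p)                         # probability of early termination
    reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
                 for x1 in range(r1 + 1, n1 + 1))
    return reject, pet

# Illustrative design: 14 patients in stage 1, 18 more in stage 2
for p, label in [(0.30, "under p0 = 0.30"), (0.50, "under p1 = 0.50")]:
    rej, pet = two_stage_oc(n1=14, r1=4, n2=18, r=15, p=p)
    print(f"{label}: reject prob = {rej:.3f}, early stop prob = {pet:.3f}")
```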
17.
    
In predicting a response variable using a multiple linear regression model, several candidate models may be available that are subsets of the full model. Shrinkage estimators borrow information from the full model and provide a hybrid estimate of the regression parameters by shrinking the full-model estimates toward the candidate submodel. The process introduces bias into the estimation but reduces the overall prediction error, which offsets the bias. In this article, we give an overview of shrinkage estimators and their asymptotic properties. A real data example is given and a Monte Carlo simulation study is carried out to evaluate the performance of shrinkage estimators compared to absolute penalty estimators such as the least absolute shrinkage and selection operator (LASSO), adaptive LASSO and smoothly clipped absolute deviation (SCAD), based on a prediction error criterion in a multiple linear regression setup. WIREs Comput Stat 2012, 4:541–553. DOI: 10.1002/wics.1232 This article is categorized under:
  • Statistical Learning and Exploratory Methods of the Data Sciences > Modeling Methods
  • Statistical Models > Linear Models
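A minimal sketch contrasting a textbook Stein-type (positive-part) shrinkage of the full-model OLS estimate toward a candidate submodel with a cross-validated Lasso, compared by test-set prediction error on simulated data; the article's exact estimators, the adaptive LASSO and SCAD comparators, and its real-data example are not reproduced:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 10
beta = np.zeros(p)
beta[:2] = [2.0, 1.5]                              # only the first two predictors matter
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_full = ols(Xtr, ytr)                             # full-model estimate
b_sub = np.zeros(p)
b_sub[:2] = ols(Xtr[:, :2], ytr)                   # candidate submodel: first two predictors

# Wald-type statistic for the restriction "the last 8 coefficients are zero"
resid = ytr - Xtr @ b_full
sigma2 = resid @ resid / (len(ytr) - p)
XtX_inv = np.linalg.inv(Xtr.T @ Xtr)
q = p - 2
T = b_full[2:] @ np.linalg.inv(XtX_inv[2:, 2:]) @ b_full[2:] / sigma2

w = max(0.0, 1 - (q - 2) / T)                      # positive-part Stein-type weight
b_shrink = b_sub + w * (b_full - b_sub)            # shrink full estimate toward submodel

lasso = LassoCV(cv=5).fit(Xtr, ytr)
for name, pred in [("full OLS", Xte @ b_full), ("submodel", Xte @ b_sub),
                   ("shrinkage", Xte @ b_shrink), ("LassoCV", lasso.predict(Xte))]:
    print(name, round(float(np.mean((yte - pred) ** 2)), 3))
```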
18.
    
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
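A minimal sketch of the kind of sample-size review the interim analysis allows, using the standard two-sample normal approximation with a revised SD estimate; the re-estimation procedure developed for the diabetic neuropathic pain trial, including any blinding considerations or type I error adjustments, is not reproduced, and all numbers below are illustrative assumptions:

```python
from scipy.stats import norm

def n_per_arm(sd, delta, alpha=0.05, power=0.8):
    """Standard two-sample normal approximation for the per-arm sample size
    needed to detect a mean difference `delta` with the given power."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2

planned_sd, target_delta = 2.0, 1.5
print("planned n per arm:", round(n_per_arm(planned_sd, target_delta)))

# At the interim analysis the pooled SD estimate turns out larger than assumed,
# so the second-stage sample size is revised upward.
interim_sd = 2.6
print("revised n per arm:", round(n_per_arm(interim_sd, target_delta)))
```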
19.
    
The problem of detecting multiple undocumented change-points in a historical temperature sequence with a simple linear trend is formulated as a linear model. We apply the adaptive least absolute shrinkage and selection operator (Lasso) to estimate the number and locations of change-points. Model selection criteria are used to choose the Lasso smoothing parameter. As the adaptive Lasso may overestimate the number of change-points, we perform post-selection on the change-points detected by the adaptive Lasso using multivariate t simultaneous confidence intervals. Our method is demonstrated on the annual temperature data (year: 1902–2000) from Tuscaloosa, Alabama.
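A minimal sketch of change-point detection with a Lasso over step-indicator columns plus a linear trend; it uses a plain cross-validated Lasso rather than the adaptive Lasso, omits the model-selection criteria and the multivariate t post-selection intervals used in the paper, and runs on simulated data rather than the Tuscaloosa series:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n = 99                                        # e.g. annual means, 1902-2000
t = np.arange(n)
y = 0.01 * t + rng.normal(0, 0.3, n)          # slow linear warming trend plus noise
y[40:] += 0.8                                 # an abrupt upward shift starting at index 40

# Design matrix: linear trend plus a step indicator for every candidate change location
steps = (t[:, None] > np.arange(1, n)[None, :]).astype(float)
X = np.column_stack([t, steps])

fit = LassoCV(cv=5, max_iter=20000).fit(X, y)
cands = np.flatnonzero(np.abs(fit.coef_[1:]) > 1e-6) + 1
print("candidate change-point locations:", cands)
```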
20.
    
In this paper, we consider tests for assessing whether two stationary and independent time series have the same spectral densities (or, equivalently, the same autocovariance functions). Both frequency domain and time domain test statistics for this purpose are reviewed. The adaptive Neyman tests are then introduced and their performance is investigated. Our tests are adaptive, that is, they are constructed completely from the data and do not involve any unknown smoothing parameters. Simulation studies show that the proposed tests are at least comparable to current tests in most cases. Furthermore, our tests are much more powerful in some cases, such as against high-order autoregressive moving average (ARMA) alternatives like seasonal ARMA series.
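A rough sketch of a frequency-domain comparison in the spirit of an adaptive Neyman statistic, built from standardized log-periodogram differences; the exact standardization, ordering of coordinates, and null calibration used in the paper are not reproduced, and the statistic below is printed without reference to a critical value:

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(5)
n = 512
x = rng.normal(size=n)                                         # white noise
e = rng.normal(size=n + 1)
y = np.convolve(e, [1.0, 0.8])[1:n + 1]                        # MA(1): a different spectrum

_, Ix = periodogram(x)
_, Iy = periodogram(y)
Ix, Iy = Ix[1:-1], Iy[1:-1]                                    # drop zero and Nyquist frequencies

# Under equal spectra, log-periodogram differences at Fourier frequencies are
# approximately iid logistic with mean 0 and variance pi^2 / 3.
z = (np.log(Ix) - np.log(Iy)) / (np.pi / np.sqrt(3))
m = np.arange(1, len(z) + 1)
T = np.max(np.cumsum(z ** 2 - 1) / np.sqrt(2 * m))             # adaptive Neyman-type maximum
print("adaptive Neyman-type statistic:", round(float(T), 2))
```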