21.
Abstract.  This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically, and its practical implementation has several major advantages over the corresponding methodology for the proportional hazards model. One complication compared with the proportional model, however, is that there is no simple likelihood to work with. We study a least squares criterion with desirable properties and show how this criterion can be interpreted as a prediction error. Given this criterion, we define ridge and Lasso estimators as well as an adaptive Lasso and study their large-sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations it is more relevant to tackle the case where p is large relative to the number of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal when the number of covariates is large. In a simulation study, we also compare the Dantzig selector and the adaptive Lasso for a moderate to small number of covariates. The methods are applied to a breast cancer data set with gene expression recordings and to the primary biliary cirrhosis clinical data.
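The adaptive Lasso step described above can be illustrated with a covariate-reweighting trick: a pilot estimate supplies data-driven penalty weights, and an ordinary Lasso on rescaled columns then yields the adaptively penalized fit. The sketch below uses a generic least-squares working response and scikit-learn's Lasso as a stand-in for the paper's additive-hazards prediction-error criterion; the ridge pilot, the weight exponent, and the simulated data are all illustrative assumptions.

```python
# Adaptive Lasso via covariate rescaling: a sketch, not the paper's exact
# additive-hazards least-squares criterion.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 0.8]                      # sparse signal
y = X @ beta_true + rng.normal(scale=0.5, size=n)     # working response

# 1. Pilot (ridge) estimate gives data-driven penalty weights.
pilot = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(pilot) + 1e-8)                      # weight exponent gamma = 1 assumed

# 2. Rescale columns so an ordinary Lasso applies the weighted penalties.
X_scaled = X / w
fit = Lasso(alpha=0.05).fit(X_scaled, y)
beta_hat = fit.coef_ / w                              # map back to the original scale

print("selected covariates:", np.flatnonzero(beta_hat != 0))
```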
22.
Abstract.  This paper extends operations strategy theory on efficiency and flexibility trade-offs to the emergent phenomenon of redistributed manufacturing (RDM). The study adopts a multiple-case design including five small and five large pharmaceutical firms. We propose that organizations can gain the efficiency benefits of centralized manufacturing and the flexibility advantages of RDM by building an ambidexterity capability. To build such a capability, large firms can structurally partition their manufacturing and supply management functions, with one sub-unit managing centralized production and the other RDM. Smaller enterprises can build an ambidexterity capability by creating the right organizational context, where multi-skilled workers switch between efficient and flexible tasks. This paper contributes to theory by explaining the emergence of RDM using an organizational ambidexterity lens, laying the groundwork for new theory development in the field. We provide managers with a practical example of how to build an ambidexterity capability to realize flexibility and efficiency advantages.
23.
The GARCH model has been commonly used to describe the volatility of foreign exchange returns, which typically depends on returns many lags back. While the GARCH model provides a simple geometrically decaying structure for persistence over time, it restricts the impact of past variables to quadratic functions. A finite nonparametric GARCH model is proposed that allows this impact to be a smooth function of any form. A direct local polynomial estimation method for this finite GARCH model is proposed, based on results for the proportional additive model, and is applied to the German Mark (DEM)/US Dollar (USD) daily returns data. Estimators of both the decay rate and the impact function are obtained. Diagnostics show satisfactory out-of-sample prediction based on the proposed model, which helps to better understand the dynamics of foreign exchange volatility.
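As a rough illustration of the local polynomial idea, the sketch below fits a local-linear smoother of squared returns on the previous return, i.e. a nonparametric one-lag impact function. This is not the paper's estimator, which also handles the geometric decay over many lags; the Gaussian kernel, bandwidth, and simulated "returns" are assumptions.

```python
# Local-linear estimate of a nonparametric volatility impact function m(x)
# in E[r_t^2 | r_{t-1} = x] ~ m(x): an illustrative one-lag sketch only.
import numpy as np

def local_linear(x0, x, y, h):
    """Local-linear fit at x0 with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                                    # intercept = fitted value at x0

rng = np.random.default_rng(1)
r = rng.standard_t(df=6, size=1000) * 0.01            # toy heavy-tailed "returns"
x, y = r[:-1], r[1:] ** 2                              # lagged return vs. squared return
grid = np.linspace(-0.03, 0.03, 41)
m_hat = np.array([local_linear(g, x, y, h=0.005) for g in grid])
```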
24.
Based on B-spline basis functions and the smoothly clipped absolute deviation (SCAD) penalty, we present a new modal-regression-based estimation and variable selection procedure for partially linear additive models. The outstanding merit of the new method is that it is robust against outliers and heavy-tailed error distributions, yet performs no worse than least-squares-based estimation in the normal error case. The main difference is that the standard quadratic loss is replaced by a kernel function depending on a bandwidth that can be selected automatically from the observed data. With appropriate selection of the regularization parameters, the new method possesses consistency in variable selection and the oracle property in estimation. Finally, both a simulation study and a real data analysis are performed to examine the performance of our approach.
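The replacement of the quadratic loss by a kernel is the heart of modal regression. A minimal sketch of kernel-mode linear regression follows, fitted by an EM-type iteratively reweighted least squares scheme with a Gaussian kernel; the paper's B-spline expansion and SCAD penalty are omitted, and the fixed bandwidth is an assumption.

```python
# Modal linear regression by kernel-weighted iteratively reweighted least
# squares (an EM-type algorithm); B-splines and the SCAD penalty are omitted.
import numpy as np

def modal_regression(X, y, h, n_iter=50):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]       # least-squares start
    for _ in range(n_iter):
        resid = y - X1 @ beta
        w = np.exp(-0.5 * (resid / h) ** 2)            # Gaussian kernel weights
        WX = X1 * w[:, None]
        beta = np.linalg.solve(X1.T @ WX, WX.T @ y)    # weighted least squares update
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.standard_t(df=2, size=300)   # heavy-tailed errors
print(modal_regression(X, y, h=1.0))
```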
25.
This paper describes a technique for computing approximate maximum pseudolikelihood estimates of the parameters of a spatial point process. The method is an extension of Berman & Turner's (1992) device for maximizing the likelihoods of inhomogeneous spatial Poisson processes. For a very wide class of spatial point process models the likelihood is intractable, while the pseudolikelihood is known explicitly, except for the computation of an integral over the sampling region. Approximation of this integral by a finite sum in a special way yields an approximate pseudolikelihood which is formally equivalent to the (weighted) likelihood of a loglinear model with Poisson responses. This can be maximized using standard statistical software for generalized linear or additive models, provided the conditional intensity of the process takes an 'exponential family' form. Using this approach a wide variety of spatial point process models of Gibbs type can be fitted rapidly, incorporating spatial trends, interaction between points, dependence on spatial covariates, and mark information.
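The key device is replacing the intractable integral by a quadrature sum over the data points plus a grid of dummy points. The sketch below shows that idea for a trend-only (inhomogeneous Poisson) model with a log-linear intensity, maximizing the discretized objective directly with scipy rather than through GLM software; the unit window, dummy grid, equal quadrature weights, and absence of interaction terms are simplifying assumptions.

```python
# Berman-Turner style approximation: discretize the integral of the intensity
# with dummy points and quadrature weights, then maximize the resulting
# log (pseudo)likelihood. Trend-only sketch; no interaction terms.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = rng.uniform(size=(60, 2))                    # observed points in the unit square
g = np.linspace(0.025, 0.975, 20)
dummy = np.array([(a, b) for a in g for b in g])    # regular grid of dummy points
quad = np.vstack([data, dummy])                     # all quadrature points
w = np.full(len(quad), 1.0 / len(quad))             # equal weights summing to the window area

def neg_log_pl(theta):
    # log-linear intensity: lambda(x, y) = exp(t0 + t1*x + t2*y)
    Z = np.column_stack([np.ones(len(quad)), quad])
    eta = Z @ theta
    # sum of log-intensity at the data points minus the quadrature
    # approximation of the integral of the intensity over the window
    return -(np.sum(eta[: len(data)]) - np.sum(w * np.exp(eta)))

theta_hat = minimize(neg_log_pl, x0=np.zeros(3), method="BFGS").x
```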
26.
Using a sample of 62 countries and a semiparametric additive model, this study analyses how factors at four levels affect banking efficiency: the macroeconomic environment, industry institutions, industry structure, and culture. The empirical results show that a moderate degree of mixed banking operations, a sound credit system, strong public supervision, a vigorously developing capital market, and a stable macroeconomic environment can effectively raise banking efficiency. An analysis of bank size further confirms that economies of scale do exist in the banking industry.
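A semiparametric additive model of this kind combines smooth terms for some factors with a linear (parametric) term for others. The sketch below shows one way such a model could be fitted with the pygam library (assumed available); the covariates and data are placeholders, not the paper's 62-country sample.

```python
# A semiparametric additive model sketch with pygam (assumed available);
# the covariates and data are placeholders, not the study's country panel.
import numpy as np
from pygam import LinearGAM, s, l

rng = np.random.default_rng(4)
n = 200
X = rng.normal(size=(n, 3))            # e.g. credit-system index, capital-market depth, GDP growth
size = rng.uniform(1, 10, size=n)      # bank size, entered linearly to check scale economies
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2]
     + 0.1 * size + rng.normal(scale=0.2, size=n))

design = np.column_stack([X, size])
gam = LinearGAM(s(0) + s(1) + s(2) + l(3)).fit(design, y)   # smooth factors + linear size term
gam.summary()
```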
27.
In this paper, we consider a semiparametric regression approach to modelling non-linear autoregressive time series. Based on a finite series approximation to the non-parametric components, an adaptive procedure for selecting the number of summands in the series approximation is proposed. A large-sample study is detailed, and a small-sample simulation for the Mackey–Glass system is presented to support the large-sample results.
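The sketch below illustrates one simple way to pick the number of summands in a series approximation, using a BIC criterion over candidate truncation levels for a nonlinear AR(1) mean function; the cosine basis, the BIC rule, and the simulated series are assumptions and not the paper's exact adaptive procedure.

```python
# Choose the number of series terms for a nonlinear AR(1) mean function by BIC.
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = np.zeros(n)
for t in range(1, n):                                  # toy nonlinear AR(1) series
    x[t] = 0.8 * np.sin(np.pi * x[t - 1]) + 0.3 * rng.normal()

y, lag = x[1:], x[:-1]
u = (lag - lag.min()) / (lag.max() - lag.min() + 1e-12)   # rescale lag to [0, 1]

def basis(u, K):
    # constant plus K cosine terms (one possible series basis)
    return np.column_stack([np.ones_like(u)] + [np.cos(np.pi * k * u) for k in range(1, K + 1)])

best = None
for K in range(1, 16):
    B = basis(u, K)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    rss = np.sum((y - B @ coef) ** 2)
    bic = len(y) * np.log(rss / len(y)) + (K + 1) * np.log(len(y))
    if best is None or bic < best[0]:
        best = (bic, K)
print("selected number of summands:", best[1])
```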
28.
In light of the Armitage-Doll multistage carcinogenesis theory, this paper examines the assumption that an additive relative risk relationship is indicative of two carcinogens that affect the same stage of the cancer process. We present formulas to compute excess cancer risks for a variety of limited-duration exposure patterns to two carcinogens that affect the first and penultimate stages; using an index of synergy proposed by Thomas (1982), we find that a number of these patterns produce additive, or nearly additive, relative risk relationships. The consistent feature of these patterns is that the two exposure periods are of short duration and occur close together.
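For orientation, the Armitage-Doll k-stage model takes age-specific incidence to be roughly proportional to a power of age, and departures from additivity of relative risks are commonly summarized with a synergy index of the general form below. This is a standard Rothman-type index given here only as context; the precise index used by Thomas (1982) and the paper's excess-risk formulas may be parameterized differently.

```latex
% Armitage-Doll background incidence and a generic synergy index for two
% exposures with joint relative risk R_{12}; the exact form in Thomas (1982)
% may differ.
\begin{align*}
  I(t) &\propto t^{\,k-1}, \\
  \text{additivity:}\quad R_{12} &\approx R_{1} + R_{2} - 1, \\
  S &= \frac{R_{12} - 1}{(R_{1} - 1) + (R_{2} - 1)}, \qquad S \approx 1 \ \text{under additivity.}
\end{align*}
```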
29.
Two bootstrap procedures are introduced into the hybrid of the backfitting algorithm and the Cochrane–Orcutt procedure for estimating a spatial-temporal model. The use of time blocks of consecutive observations in the resampling steps proved to be optimal in terms of stability and efficiency of the estimates. Between iterations, there were minimal changes in the empirical distributions of the parameter estimates associated with the covariate and temporal effects, indicating convergence of the algorithm. Crop yield data are used to illustrate the proposed methods.

The simulation study indicated that the prediction error from the fitted model (estimated with either Method 1 or Method 2) is very low. The prediction error is also relatively robust to the number of spatial units and the number of time points.
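The resampling of time blocks of consecutive observations mentioned above is the moving-block bootstrap device sketched below; the block length and the residual series are illustrative, and the surrounding backfitting and Cochrane–Orcutt steps are omitted.

```python
# Moving-block bootstrap: resample blocks of consecutive time points and
# concatenate them into a bootstrap series. The backfitting and
# Cochrane-Orcutt steps it would be embedded in are omitted here.
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]                   # trim to the original length

rng = np.random.default_rng(6)
resid = rng.normal(size=120)                            # stand-in for model residuals
boot_samples = [moving_block_bootstrap(resid, block_len=10, rng=rng) for _ in range(500)]
```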
30.
The evaluation of new processor designs is an important issue in electrical and computer engineering. Architects use simulations to evaluate designs and to understand trade-offs and interactions among design parameters. However, due to lengthy simulation times and limited resources, it is often practically impossible to simulate a full factorial design space. Effective sampling methods and predictive models are required. In this paper, the authors propose an automated performance prediction approach that employs an adaptive sampling scheme working interactively with the predictive model to select samples for simulation. These samples are then used to build Bayesian additive regression trees, which in turn are used to predict the whole design space. Both a real data analysis and simulation studies show that the method is effective: although it samples very few design points, it generates highly accurate predictions at the unsampled points. Furthermore, the proposed model provides quantitative interpretation tools with which investigators can efficiently tune design parameters to improve processor performance. The Canadian Journal of Statistics 38: 136–152; 2010 © 2010 Statistical Society of Canada
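The interplay between the adaptive sampler and the tree model can be sketched with a simple loop: fit a tree ensemble to the simulated points, then add the design point where the per-tree predictions disagree most. A random forest is used here purely as a stand-in for BART, and the toy two-parameter design space and "simulator" are assumptions.

```python
# Adaptive sampling loop with a tree ensemble: pick the unsampled design
# points where per-tree predictions disagree most, simulate them, refit.
# A random forest stands in for Bayesian additive regression trees.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulate(design, rng):                              # placeholder "simulator"
    return np.sin(design[:, 0]) * design[:, 1] + 0.01 * rng.normal(size=len(design))

g = np.linspace(0, 3, 25)
space = np.array([(a, b) for a in g for b in g])        # full factorial design space
rng = np.random.default_rng(7)
idx = list(rng.choice(len(space), size=20, replace=False))   # initial sample

for _ in range(10):                                     # 10 adaptive rounds
    X, y = space[idx], simulate(space[idx], rng)        # (re)simulate sampled points for simplicity
    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    per_tree = np.stack([t.predict(space) for t in forest.estimators_])
    uncertainty = per_tree.std(axis=0)                  # disagreement among trees
    uncertainty[idx] = -np.inf                          # never re-pick sampled points
    idx.append(int(np.argmax(uncertainty)))             # add the most uncertain point

predictions = forest.predict(space)                     # predict the whole design space
```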