91.
Loddon Mallee Integrated Cancer Service plays a key role in planning the delivery of cancer services in the Loddon Mallee Region of Victoria, Australia. Forecasting the incidence of cancer is an important part of planning for these services. This article is written from an industry perspective. We describe the context of our work, review the literature on forecasting cancer incidence, discuss contemporary approaches, describe our experience with forecasting models, and list issues associated with applying these models. An extensive bibliography illustrates the worldwide interest in this forecasting problem, and we hope it is useful to researchers.
92.
In this article, we present a compressive sensing-based framework for generalized linear model regression that employs a two-component noise model and convex optimization techniques to simultaneously detect outliers and determine optimally sparse representations of noisy data from arbitrary sets of basis functions. We then extend our model in two ways. First, we add model order reduction capabilities that can uncover inherent sparsity in regression coefficients and achieve simple, superior fits. Second, we use the mixed ℓ2/ℓ1 norm to develop another model that can efficiently uncover block sparsity in regression coefficients. By performing model order reduction over all independent variables and basis functions, our algorithms successfully de-emphasize independent variables that become uncorrelated with the dependent variables. This property has various applications in real-time anomaly detection, such as faulty sensor detection and sensor jamming in wireless sensor networks. After developing our framework and inheriting a stable recovery theorem from compressive sensing theory, we present two simulation studies on sparse or block-sparse problems that demonstrate the superior performance of our algorithms with respect to (1) classic outlier-robust regression techniques such as least absolute value and iteratively reweighted least squares and (2) classic sparsity-regularized regression techniques such as the LASSO.
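As a rough sketch of the sparsity-seeking convex optimization this abstract builds on (a generic ℓ1-regularized least-squares solver, not the authors' framework; all dimensions and parameter values here are illustrative):

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=1000):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by ISTA
    (iterative soft-thresholding), a basic sparse-regression solver."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the smooth part
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Illustrative sparse-recovery setup: 50 noisy samples, 200 features,
# only the first 5 coefficients are nonzero.
rng = np.random.default_rng(0)
n, p, k = 50, 200, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[:k] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

x_hat = ista_lasso(A, y, lam=0.02)
```

With an incoherent design and low noise, the largest-magnitude entries of `x_hat` coincide with the true support, which is the kind of sparse recovery the abstract's stable recovery theorem formalizes.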
93.
Interval-censored data arise naturally in many studies. For their regression analysis, many approaches have been proposed under various models, and for most of them inference is carried out based on asymptotic normality. In particular, Zhang et al. (2005) discussed such a procedure under the linear transformation model. It is well known that the symmetry implied by the normal distribution may sometimes be inappropriate, and the method may underestimate the variance of the estimated parameters. This paper proposes an empirical likelihood-based procedure for the problem. A simulation study and the analysis of a real data set are conducted to assess the performance of the procedure.
94.
Abstract

This paper searches for A-optimal designs for Kronecker product and additive regression models when the errors are heteroscedastic. Sufficient conditions are given under which A-optimal designs for the multifactor models can be built from A-optimal designs for their single-factor sub-models. We also report an efficiency study that checks how adequate the products of optimal designs for single-factor marginal models are when they are used to estimate different multifactor models.
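As a minimal illustration of the A-optimality criterion discussed above, i.e. the trace of the inverse information matrix (the model and candidate designs here are illustrative, not taken from the paper):

```python
import numpy as np

def a_criterion(design_points):
    """A-optimality criterion for the straight-line model E[y] = b0 + b1*x:
    trace of the inverse information matrix (X'X)^{-1}; smaller is better."""
    X = np.column_stack([np.ones(len(design_points)), design_points])
    return np.trace(np.linalg.inv(X.T @ X))

# Two 4-point candidate designs on [-1, 1]
endpoints = np.array([-1.0, -1.0, 1.0, 1.0])   # all mass on the endpoints
uniform = np.array([-1.0, -1/3, 1/3, 1.0])     # equally spaced points
```

Here `a_criterion(endpoints)` evaluates to 0.5 versus 0.7 for the equally spaced design, so for estimating the intercept and slope jointly the endpoint design is A-better; multifactor product constructions of the kind the paper studies compare designs by the same criterion.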
95.
ABSTRACT

This article investigates a quasi-maximum exponential likelihood estimator (QMELE) for a nonstationary generalized autoregressive conditional heteroscedastic (GARCH(1,1)) model. Asymptotic normality of this estimator is derived under a nonstationarity condition. A simulation study and a real example are given to evaluate the performance of the QMELE for this model.
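For orientation, the GARCH(1,1) variance recursion and the exponential quasi-likelihood criterion that a QMELE optimizes can be sketched as follows; this is a minimal illustration in the stationary case with illustrative parameter values, not the authors' nonstationary estimator:

```python
import numpy as np

def garch_sigma2(y, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2_t = omega + alpha * y_{t-1}^2 + beta * sigma2_{t-1}.
    Initialized at the stationary variance (an illustrative choice)."""
    s2 = np.empty(len(y))
    s2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(y)):
        s2[t] = omega + alpha * y[t - 1] ** 2 + beta * s2[t - 1]
    return s2

def neg_exp_quasi_loglik(y, omega, alpha, beta):
    """Negative exponential quasi-log-likelihood:
    sum_t ( log sigma_t + |y_t| / sigma_t ), minimized by the QMELE."""
    s = np.sqrt(garch_sigma2(y, omega, alpha, beta))
    return np.sum(np.log(s) + np.abs(y) / s)

# Simulate a stationary GARCH(1,1) path and evaluate the criterion at
# the true parameters (illustrative values).
rng = np.random.default_rng(1)
T, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
y = np.empty(T)
s2 = omega / (1.0 - alpha - beta)
for t in range(T):
    y[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * y[t] ** 2 + beta * s2

crit = neg_exp_quasi_loglik(y, omega, alpha, beta)
```

In the nonstationary regime the paper studies, alpha + beta ≥ 1 and the stationary initialization above no longer applies, which is exactly where the asymptotics become delicate.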
96.
ABSTRACT

We consider semiparametric inference for the partially linear single-index model (PLSIM). The generalized likelihood ratio (GLR) test is proposed to examine whether a family of new semiparametric models adequately fits the given data in the PLSIM. A new GLR statistic is established for testing the index parameter α0 in the PLSIM. The newly proposed statistic is shown to asymptotically follow a χ2-distribution, with the scale constant and the degrees of freedom independent of the nuisance parameters or function. Finite-sample simulations and a real example are used to illustrate the proposed methodology.
97.
ABSTRACT

Markov chain Monte Carlo (MCMC) methods can be used for statistical inference but are often time-consuming, particularly for time-varying models. To address this, parallel tempering (PT), a parallel MCMC method, is applied to dynamic generalized linear models (DGLMs), and several optimality properties of the proposed method are established. In PT, two or more chains are sampled at the same time and can exchange information with each other. We also present simulations of DGLMs and provide two applications of Poisson-type DGLMs in financial research.
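The chain-swapping mechanism of parallel tempering can be sketched as follows; this is a generic illustration on a toy bimodal target, not the paper's DGLM sampler (the target, temperatures, and step sizes are all illustrative):

```python
import numpy as np

def log_target(x):
    """Log-density (up to a constant) of a bimodal mixture of two
    unit-variance Gaussians centred at -3 and +3."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def parallel_tempering(n_iter=5000, temps=(1.0, 4.0, 16.0), seed=0):
    """Random-walk Metropolis chains at several temperatures, with state
    swaps proposed between neighbouring temperatures each sweep."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(temps))                 # one state per temperature
    cold = []                                # samples from the T = 1 chain
    for _ in range(n_iter):
        for i, T in enumerate(temps):        # within-chain Metropolis update
            prop = x[i] + rng.normal(0.0, 1.0 + T)
            if np.log(rng.uniform()) < (log_target(prop) - log_target(x[i])) / T:
                x[i] = prop
        i = rng.integers(len(temps) - 1)     # propose a neighbour swap
        a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
            log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.uniform()) < a:        # Metropolis swap acceptance
            x[i], x[i + 1] = x[i + 1], x[i]
        cold.append(x[0])
    return np.asarray(cold)

samples = parallel_tempering()
```

The hot chains move freely across the valley between the modes, and swaps feed those crossings back to the cold chain, which is the "information exchange" the abstract refers to.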
98.
ABSTRACT

Traditional credit risk assessment models do not consider the time factor: they predict whether a customer will default, but not when. Such results do not help a manager make profit-maximizing decisions, because even if a customer defaults, the financial institution can still profit under some conditions. Much recent research has applied the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default. However, fully exploiting the dynamic capability of the Cox proportional hazards model requires time-varying macroeconomic variables, which demand more advanced data collection. Since short-term defaults are the ones that cause large losses for a financial institution, a loan manager approving applications is less interested in predicting exactly when a loan will default than in identifying applications likely to default within a short period. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to filter short-term defaults and produce a highly accurate model that distinguishes default lending. We integrate bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the decision tree's stability and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a financial institution in Taiwan illustrates the proposed approach. Comparing the results with the logistic regression and Cox proportional hazards models shows that the recall and precision of the proposed model are clearly superior.
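The SMOTE step mentioned above can be sketched as follows; this is a minimal stand-alone version that interpolates between minority-class neighbours, not the paper's full bagging + SMOTE pipeline (all names and sizes are illustrative):

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: for each synthetic point, pick a random
    minority sample and interpolate toward one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(seed)
    # Pairwise distances among minority samples
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    # k nearest neighbours of each point, excluding the point itself
    nn = np.argsort(d, axis=1)[:, 1:k + 1]
    out = np.empty((n_new, X_min.shape[1]))
    for j in range(n_new):
        i = rng.integers(len(X_min))                  # a minority sample
        nb = X_min[rng.choice(nn[i])]                 # one of its neighbours
        out[j] = X_min[i] + rng.uniform() * (nb - X_min[i])  # interpolate
    return out

# Illustrative unbalanced setting: 20 minority samples in 3 dimensions,
# oversampled with 80 synthetic points to balance against a majority class.
rng = np.random.default_rng(1)
X_minority = rng.normal(loc=2.0, size=(20, 3))
X_new = smote(X_minority, n_new=80)
```

Because each synthetic point lies on a segment between two real minority samples, SMOTE densifies the minority region instead of duplicating points, which is what stabilizes trees trained on the rebalanced bootstrap samples.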
99.
ABSTRACT

In this paper, we extend a variance shift model, previously considered for linear mixed models, to linear mixed measurement error models using the corrected likelihood of Nakamura (1990). This model assumes that a single outlier arises from an observation with inflated variance. We derive the score test and an analogue of the likelihood ratio test to assess whether the ith observation has inflated variance. A parametric bootstrap procedure is implemented to obtain empirical distributions of the test statistics. Finally, results of a simulation study and a real-data example are presented to illustrate the performance of the proposed tests.
100.
Abstract

In this paper, we discuss how to model the mean and covariance structures in linear mixed models (LMMs) simultaneously. We propose a data-driven method to model the covariance structures of the random effects and random errors in LMMs. Parameters in the mean and covariances are estimated with the EM algorithm, and standard errors of the parameter estimates are calculated through Louis' (1982) information principle. Kenward's (1987) cattle data sets are analyzed for illustration, and comparisons with existing work are made through simulation studies. Our numerical analysis confirms the superiority of the proposed method to existing approaches in terms of the Akaike information criterion.