31.
The standard tensile test is one of the most frequently used tools for evaluating the mechanical properties of metals. An empirical model proposed by Ramberg and Osgood fits tensile test data using a nonlinear model for the strain in terms of the stress. It is an error-in-variables (EIV) model because of the uncertainty affecting both the strain and the stress measurement instruments. SIMEX, a simulation-based method for estimating model parameters, is effective at reducing the bias due to measurement error in EIV models. The plan of this article is as follows. In Section 2, we introduce the Ramberg–Osgood model and a reparametrization of it under different assumptions on the independent variable. Section 3 summarizes the SIMEX method for the case at hand. Section 4 compares SIMEX with other estimation methods in order to highlight the peculiarities of the different approaches. The last section offers some concluding remarks.
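To make the mechanics of SIMEX concrete, here is a minimal sketch on synthetic Ramberg–Osgood data, using one common parametrization of the model, ε = σ/E + (σ/K)^n. All numerical values (E, K, n, the noise level), the quadratic extrapolant, and the number of replicates are illustrative assumptions, not choices taken from the article.

```python
# A minimal SIMEX sketch for the Ramberg-Osgood model; the material
# constants (E, K, n), the noise level sigma_u, and the synthetic data
# are illustrative assumptions, not values from the article.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def ramberg_osgood(stress, E, K, n):
    """Strain as a nonlinear function of stress: elastic + plastic term."""
    return stress / E + (stress / K) ** n

# Synthetic "true" data with measurement error on the stress (EIV setting).
E_true, K_true, n_true = 200e3, 900.0, 5.0       # MPa units, assumed
stress_true = np.linspace(100, 800, 50)
strain = ramberg_osgood(stress_true, E_true, K_true, n_true)
sigma_u = 10.0                                    # known stress-noise s.d.
stress_obs = stress_true + rng.normal(0, sigma_u, stress_true.size)

def naive_fit(x, y):
    popt, _ = curve_fit(ramberg_osgood, x, y,
                        p0=(150e3, 800.0, 4.0), maxfev=20000)
    return popt

# SIMEX: add extra noise at levels lambda, fit naively, average over
# replicates, then extrapolate the estimates back to lambda = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 50                                            # replicates, kept small
estimates = []
for lam in lambdas:
    fits = [naive_fit(stress_obs + rng.normal(0, np.sqrt(lam) * sigma_u,
                                              stress_obs.size), strain)
            for _ in range(B)]
    estimates.append(np.mean(fits, axis=0))
estimates = np.array(estimates)

# Quadratic extrapolation of each parameter to lambda = -1.
simex_est = [np.polyval(np.polyfit(lambdas, estimates[:, j], 2), -1.0)
             for j in range(3)]
print("SIMEX estimates (E, K, n):", simex_est)
```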
32.
Traditional credit risk assessment models do not consider the time factor; they only consider whether a customer will default, not when the default will occur. Their output therefore cannot help a manager make profit-maximizing decisions. In fact, even when a customer defaults, the financial institution can still profit under some conditions. Much recent research has applied the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default. However, in order to exploit the fully dynamic capability of the Cox proportional hazards model, time-varying macroeconomic variables are required, which involve more advanced data collection. Since short-term defaults are the cases that inflict the greatest loss on a financial institution, a loan manager is less interested in predicting when a loan will default than in identifying, when approving an application, those loans that may default within a short period of time. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to filter short-term defaults and produce a highly accurate model that can distinguish default lending. The model integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the decision tree's stability and its performance on imbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan is presented to illustrate the proposed approach. Compared with the logistic regression and Cox proportional hazards models, the recall and precision of the proposed model were clearly superior.
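A minimal sketch of the SMOTE-plus-bagged-trees pipeline the abstract describes, using scikit-learn and imbalanced-learn; the synthetic data and all hyperparameters are illustrative assumptions, not the article's actual configuration.

```python
# SMOTE + bagging + decision trees on imbalanced toy data standing in
# for loan applications (1 = short-term default, the rare class).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Over-sample the minority class on the training set only, never the test set.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Bagged decision trees for stability
# (sklearn >= 1.2 uses `estimator`; older versions use `base_estimator`).
clf = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=8),
                        n_estimators=100, random_state=0)
clf.fit(X_bal, y_bal)

print(classification_report(y_te, clf.predict(X_te),
                            target_names=["non-default", "default"]))
```

Resampling before bagging, rather than inside each bootstrap, is one common design; the article's exact integration of the two steps may differ.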
33.
By representing fair betting odds according to one or more pairs of confidence set estimators, dual parameter distributions called confidence posteriors secure the coherence of actions without any prior distribution. This theory reduces to the maximization of expected utility when the pair of posteriors is induced by an exact or approximate confidence set estimator or when a reduction rule is applied to the pair. Unlike the p-value, the confidence posterior probability of an interval hypothesis is suitable as an estimator of the indicator of hypothesis truth since it converges to 1 if the hypothesis is true or to 0 otherwise.
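A numerical illustration of the convergence claim: for a normal mean with known variance, take the confidence posterior to be the standard confidence distribution N(x̄, σ²/n) — an assumption consistent with, but not transcribed from, the article — and watch the probability of the interval hypothesis H: μ ∈ [-0.5, 0.5] approach the indicator of H's truth as n grows.

```python
# Confidence posterior probability of an interval hypothesis under a
# normal confidence distribution; mu_true, sigma, and the interval are
# illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu_true, sigma = 0.2, 1.0          # H is true here since |0.2| <= 0.5

for n in (10, 100, 1000, 10000):
    xbar = rng.normal(mu_true, sigma, n).mean()
    se = sigma / np.sqrt(n)
    prob_H = norm.cdf(0.5, xbar, se) - norm.cdf(-0.5, xbar, se)
    print(f"n={n:6d}  confidence posterior P(H) = {prob_H:.4f}")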
34.
Given a most believed value for a quantity together with upper and lower possible deviations from that value, a rectangular distribution might be used to represent state-of-knowledge about the quantity. If the deviations are themselves known by probability distributions, and the value conditioned on the deviations is rectangular, then the marginal distribution of the value is determined by the distributions of the deviations. Here we show under quite general conditions that conversely, given the marginal distribution, the distributions of the deviations are uniquely determined. The case in which the marginal distribution is trapezoidal is studied in some detail.
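A Monte Carlo sketch of the forward construction (deviations → marginal): sample the deviations from their distributions — here arbitrarily chosen to be uniform — then sample the value conditionally rectangular, and inspect the resulting marginal. Nothing here reproduces the article's uniqueness argument; it only illustrates the setup.

```python
# Forward construction: deviations random, value | deviations rectangular.
# The uniform deviation distributions and the value m are assumptions.
import numpy as np

rng = np.random.default_rng(2)
m = 10.0                                  # most believed value
N = 1_000_000

# Deviations themselves known only up to a distribution (assumed uniform).
d_lower = rng.uniform(0.5, 1.5, N)
d_upper = rng.uniform(0.5, 1.5, N)

# Value conditioned on the deviations is rectangular on [m - d_lower, m + d_upper].
value = rng.uniform(m - d_lower, m + d_upper)

# Crude look at the marginal mixture via a text histogram.
hist, edges = np.histogram(value, bins=40, density=True)
for h, lo in zip(hist[::5], edges[:-1][::5]):
    print(f"{lo:6.2f}  " + "#" * int(60 * h))
```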
35.
This article presents some results showing how rectangular probabilities can be studied using copula theory. These results lead us to develop new lower and upper bounds for rectangular probabilities which can be computed efficiently. The new bounds are compared with the ones obtained from the generalized Fréchet–Hoeffding bounds and Bonferroni-type inequalities.
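For context, here is the standard copula expression for a bivariate rectangular probability together with the classical Fréchet–Hoeffding bounds the new bounds are compared against; this is textbook copula theory, and the article's sharper bounds are not reproduced here.

```latex
% Bivariate rectangular probability through the copula C, and the
% classical Fréchet–Hoeffding bounds on C.
\[
  \Pr(a_1 < X_1 \le b_1,\; a_2 < X_2 \le b_2)
  = C\bigl(F_1(b_1),F_2(b_2)\bigr) - C\bigl(F_1(a_1),F_2(b_2)\bigr)
  - C\bigl(F_1(b_1),F_2(a_2)\bigr) + C\bigl(F_1(a_1),F_2(a_2)\bigr),
\]
\[
  \max(u+v-1,\,0)\;\le\; C(u,v)\;\le\;\min(u,v).
\]
```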
36.
The asymptotic normality of the Cramér–von Mises one-sample test statistic and one of its variants under an alternative cdf is demonstrated. The derivation herein is unique in that it does not require knowledge of the theory of weak convergence of probability measures defined on metrized function spaces, and thus is accessible to a broader class of students and practitioners.
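A small sketch of the one-sample Cramér–von Mises statistic computed from its standard closed form, W² = 1/(12n) + Σᵢ((2i-1)/(2n) - F₀(x₍ᵢ₎))², and checked against SciPy; the N(0,1) null and the sample drawn under an alternative are illustrative, not tied to the article's derivation.

```python
# Closed-form Cramér-von Mises statistic versus scipy's implementation.
import numpy as np
from scipy.stats import norm, cramervonmises

rng = np.random.default_rng(3)
x = rng.normal(0.3, 1.0, 200)            # data drawn under an alternative

n = x.size
u = norm.cdf(np.sort(x))                 # F0 evaluated at the order statistics
i = np.arange(1, n + 1)
W2 = 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2)

print("closed form W^2:", W2)
print("scipy          :", cramervonmises(x, "norm").statistic)
```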
37.
A new method is proposed for measuring the distance between a training data set and a single, new observation. The novel distance measure reflects the expected squared prediction error when a quantitative response variable is predicted on the basis of the training data set using the distance weighted k-nearest-neighbor method. The simulation presented here shows that the distance measure correlates well with the true expected squared prediction error in practice. The distance measure can be applied, for example, in assessing the uncertainty of prediction.
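A sketch of distance-weighted k-NN prediction, the building block behind the proposed distance measure; the inverse-distance weighting scheme, the choice of k, and the toy data are assumptions for illustration, and the article's actual distance measure is not reproduced.

```python
# Distance-weighted k-NN prediction of a quantitative response.
import numpy as np

rng = np.random.default_rng(4)
X_train = rng.uniform(-3, 3, (200, 2))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)

def dw_knn_predict(x_new, X, y, k=5, eps=1e-12):
    """Predict at x_new as a weighted mean of the k nearest responses."""
    d = np.linalg.norm(X - x_new, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)              # inverse-distance weights (assumed)
    return np.sum(w * y[idx]) / np.sum(w)

x_new = np.array([0.5, -1.0])
print("prediction at x_new:", dw_knn_predict(x_new, X_train, y_train))
```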
38.
We provide a comprehensive analysis of the out-of-sample performance of a wide variety of spot rate models in forecasting the probability density of future interest rates. Although the most parsimonious models perform best in forecasting the conditional mean of many financial time series, we find that the spot rate models that incorporate conditional heteroscedasticity and excess kurtosis or heavy tails have better density forecasts. Generalized autoregressive conditional heteroscedasticity significantly improves the modeling of the conditional variance and kurtosis, whereas regime switching and jumps improve the modeling of the marginal density of interest rates. Our analysis shows that the sophisticated spot rate models in the existing literature are important for applications involving density forecasts of interest rates.
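A sketch of the kind of out-of-sample criterion such comparisons rest on: scoring competing density forecasts by average log score on held-out data. The simulated heavy-tailed "rate changes" and the two candidate densities (normal versus Student t) are toy assumptions, not the article's spot rate models.

```python
# Comparing density forecasts by average out-of-sample log score.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(5)

# Simulate heavy-tailed "interest rate changes".
y = 0.02 * t.rvs(df=4, size=2000, random_state=rng)

# Candidate density forecasts, fitted on the first half of the sample.
train, test = y[:1000], y[1000:]
mu, sd = train.mean(), train.std(ddof=1)
df_hat, loc_hat, sc_hat = t.fit(train)

logscore_normal = norm.logpdf(test, mu, sd).mean()
logscore_t = t.logpdf(test, df_hat, loc_hat, sc_hat).mean()
print(f"avg log score, normal: {logscore_normal:.4f}")
print(f"avg log score, t     : {logscore_t:.4f}   (higher is better)")
```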
39.
We develop a methodology for examining savings behavior in rural areas of developing countries that explicitly incorporates the sequential decision process in agriculture. The approach is used to examine the relative importance of alternative forms of savings in the presence and absence of formal financial intermediaries. Our results, based on stage-specific panel data from Pakistan, provide evidence that the presence of financial intermediaries importantly influences the use of formal savings and transfers for income smoothing. We also find that there are significant biases in evaluations of the savings-income relationship that are inattentive to the within-year dynamics of agricultural production.
40.
Using relative utility curves to evaluate risk prediction
Summary.  Because many medical decisions are based on risk prediction models that are constructed from medical history and results of tests, the evaluation of these prediction models is important. This paper makes five contributions to this evaluation: (a) the relative utility curve, which gauges the potential for better prediction in terms of utilities, without the need for a reference level for one utility, while providing a sensitivity analysis for misspecification of utilities; (b) the relevant region, which is the set of values of prediction performance that are consistent with the recommended treatment status in the absence of prediction; (c) the test threshold, which is the minimum number of tests that would be traded for a true positive prediction in order for the expected utility to be non-negative; (d) the evaluation of two-stage predictions that reduce test costs; and (e) connections between various measures of prediction performance. An application involving the risk of cardiovascular disease is discussed.
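A sketch relating risk prediction to utilities through decision-curve-style net benefit, with a relative utility computed against the default policy (treat all or treat none). The formulas follow the standard net-benefit literature and are an assumption here, not a transcription of the paper's exact definitions; the toy risk model is likewise hypothetical.

```python
# Net benefit of "treat if predicted risk >= threshold" and a relative
# utility scaled so that 1 corresponds to perfect prediction.
import numpy as np

rng = np.random.default_rng(6)

# Toy risk model: true risks, outcomes, and noisy predicted risks.
risk_true = rng.beta(2, 8, 10000)                # prevalence ~ 0.2
disease = rng.uniform(size=risk_true.size) < risk_true
risk_pred = np.clip(risk_true + rng.normal(0, 0.05, risk_true.size), 0, 1)

def net_benefit(pred, y, threshold):
    """Net benefit of treating everyone whose predicted risk >= threshold."""
    treat = pred >= threshold
    tp = np.mean(treat & y)
    fp = np.mean(treat & ~y)
    return tp - fp * threshold / (1 - threshold)

P = disease.mean()
for R in (0.1, 0.2, 0.3):
    nb_model = net_benefit(risk_pred, disease, R)
    nb_default = max(P - (1 - P) * R / (1 - R), 0.0)   # treat-all vs treat-none
    ru = (nb_model - nb_default) / (P - nb_default)    # 1 = perfect prediction
    print(f"threshold {R:.1f}: net benefit {nb_model:.4f}, relative utility {ru:.3f}")
```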