41.
A generalized random coefficient first-order integer-valued autoregressive process with a signed thinning operator is introduced; this kind of process is appropriate for modeling integer-valued time series that can take negative values. Strict stationarity and ergodicity of the process are established. Estimators of the parameters of interest are derived and their properties are studied via simulation. Finally, a bootstrap method is used in the analysis of real data.
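As a rough illustration of the kind of process studied here, the following is a minimal simulation sketch of a first-order integer-valued autoregressive recursion with a signed thinning operator. The particular operator (a signed binomial thinning) and the Skellam-type innovation are assumptions made for this example, not the exact specification of the paper.

```python
# Minimal sketch: first-order integer-valued AR with a signed thinning operator.
# The operator and the innovation distribution below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def signed_thin(alpha, x):
    """Signed thinning: sign(alpha * x) times a Binomial(|x|, |alpha|) count."""
    if x == 0:
        return 0
    return int(np.sign(alpha) * np.sign(x) * rng.binomial(abs(x), abs(alpha)))

def simulate_sinar1(n, alpha=0.4, lam=1.0, x0=0):
    """Simulate X_t = alpha o X_{t-1} + e_t with integer-valued innovations e_t,
    taken here as a difference of two Poisson(lam) variables (can be negative)."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        eps = rng.poisson(lam) - rng.poisson(lam)   # Skellam(lam, lam) innovation
        x[t] = signed_thin(alpha, x[t - 1]) + eps
    return x

series = simulate_sinar1(500)
print(series[:20], series.mean(), series.var())
```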
42.
Comments     

In this paper we compare Bartlett-corrected, bootstrap, and fast double bootstrap tests on maximum likelihood estimates of cointegration parameters. The key result is that both the bootstrap and the Bartlett-corrected tests must be based on the unrestricted estimates of the cointegrating vectors: procedures based on the restricted estimates have almost no power. The small-sample size bias of the asymptotic test appears severe enough to advise strongly against its use with the sample sizes commonly available; the fast double bootstrap test minimizes size bias, while the Bartlett-corrected test is somewhat more powerful.
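For readers unfamiliar with the fast double bootstrap, the sketch below shows its p-value construction on a simple t-type statistic for a mean; the statistic, the null-imposing resampling scheme, and the number of replications are illustrative stand-ins for the cointegration likelihood-ratio setting of the paper.

```python
# Hedged sketch of the fast double bootstrap (FDB) p-value on a toy statistic.
import numpy as np

rng = np.random.default_rng(1)

def t_stat(x):
    return abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))

def fdb_pvalue(x, B=999):
    tau_hat = t_stat(x)
    x0 = x - x.mean()                      # impose the null (mean zero)
    tau1 = np.empty(B)                     # first-level bootstrap statistics
    tau2 = np.empty(B)                     # one second-level statistic per draw
    for b in range(B):
        xb = rng.choice(x0, size=len(x0), replace=True)
        tau1[b] = t_stat(xb)
        xbb = rng.choice(xb - xb.mean(), size=len(x0), replace=True)
        tau2[b] = t_stat(xbb)
    p1 = np.mean(tau1 > tau_hat)           # ordinary bootstrap p-value
    q = np.quantile(tau2, 1.0 - p1)        # (1 - p1) quantile of second level
    return np.mean(tau1 > q)               # FDB p-value

sample = rng.normal(0.2, 1.0, size=50)
print(fdb_pvalue(sample))
```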
43.
Methods for assessing the variability of an estimated contour of a density are discussed. A new method called the coverage plot is proposed. Techniques including sectioning and the bootstrap are compared for a particular problem which arises in Monte Carlo simulation approaches to estimating the spatial distribution of risk in the operation of weapons firing ranges. It is found that, for computational reasons, the sectioning procedure outperforms the bootstrap for this problem. The roles of bias and sample size are also seen in the examples shown.
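The sketch below is one hedged way to compare a sectioning-style assessment with a bootstrap assessment of the variability of an estimated density contour level; the bivariate normal Monte Carlo sample and the 90% contour are illustrative choices, and neither the weapons-range data nor the coverage plot itself is reproduced here.

```python
# Hedged sketch: variability of a KDE contour level via sectioning vs. bootstrap.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
data = rng.normal(size=(2, 2000))          # 2-D Monte Carlo sample (illustrative)

def contour_level(xy, prob=0.90):
    """Density value such that roughly `prob` of the sample lies above it."""
    kde = gaussian_kde(xy)
    dens = kde(xy)
    return np.quantile(dens, 1.0 - prob)

# Sectioning: split the sample into disjoint sections, one estimate per section,
# and use the spread of section estimates to gauge the pooled estimate's error.
sections = np.array_split(data, 10, axis=1)
sec_levels = np.array([contour_level(s) for s in sections])

# Bootstrap: resample columns with replacement and re-estimate the level.
boot_levels = np.array([
    contour_level(data[:, rng.integers(0, data.shape[1], data.shape[1])])
    for _ in range(50)
])

print("sectioning-based SE of the pooled estimate:",
      sec_levels.std(ddof=1) / np.sqrt(len(sec_levels)))
print("bootstrap SE of the full-sample estimate:", boot_levels.std(ddof=1))
```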
44.
A method for obtaining bootstrapping replicates for one-dimensional point processes is presented. The method involves estimating the conditional intensity of the process and computing residuals. The residuals are bootstrapped using a block bootstrap and used, together with the conditional intensity, to define the bootstrap realizations. The method is applied to the estimation of the cross-intensity function for data arising from a reaction time experiment.
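A minimal sketch of the residual-bootstrap idea for a one-dimensional point process is given below, assuming for simplicity a constant estimated conditional intensity; in the paper the estimated conditional intensity is general, and it would replace lam_hat and the time-rescaling step.

```python
# Hedged sketch: block-bootstrapping time-rescaling residuals of a point process.
import numpy as np

rng = np.random.default_rng(3)

# Simulated event times on [0, T] from a homogeneous Poisson process.
T, lam_true = 100.0, 1.5
n = rng.poisson(lam_true * T)
times = np.sort(rng.uniform(0.0, T, size=n))

lam_hat = len(times) / T                       # estimated (constant) intensity
rescaled = lam_hat * times                     # time-rescaling transform
residuals = np.diff(rescaled, prepend=0.0)     # approx. unit-mean exponentials

def block_bootstrap(res, block_len=10):
    """Resample residuals in contiguous blocks (moving block bootstrap)."""
    n_blocks = int(np.ceil(len(res) / block_len))
    starts = rng.integers(0, len(res) - block_len + 1, size=n_blocks)
    blocks = [res[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:len(res)]

def bootstrap_realization():
    res_star = block_bootstrap(residuals)
    return np.cumsum(res_star) / lam_hat       # invert the time rescaling

print(bootstrap_realization()[:10])
```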
45.
Conventional approaches for inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When used as prediction intervals, coverage is poor when the signal-to-noise ratio is low, and improves only slowly as sample size increases. We show that prediction intervals estimated by bagging yield much better coverage than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence interval estimates for (conditional) expectations of efficiency and that has good coverage properties which improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when residuals have skewness in the “wrong” direction, i.e., in a direction that would seem to indicate an absence of inefficiency. We show that correctly specified models can generate samples with “wrongly” skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether residuals have skewness in the desired direction.
46.
Patients infected with the human immunodeficiency virus (HIV) generally experience a decline in their CD4 cell count (a count of certain white blood cells). We describe the use of quantile regression methods to analyse longitudinal data on CD4 cell counts from 1300 patients who participated in clinical trials that compared two therapeutic treatments: zidovudine and didanosine. It is of scientific interest to determine any treatment differences in the CD4 cell counts over a short treatment period. However, the analysis of the CD4 data is complicated by drop-outs: patients with lower CD4 cell counts at baseline appear more likely to drop out at later measurement occasions. Motivated by this example, we describe the use of 'weighted' estimating equations in quantile regression models for longitudinal data with drop-outs. In particular, the conventional estimating equations for the quantile regression parameters are weighted in inverse proportion to the probability of drop-out. This approach requires the process generating the missing data to be estimable but makes no assumptions about the distribution of the responses other than those imposed by the quantile regression model. This method yields consistent estimates of the quantile regression parameters provided that the model for drop-out has been correctly specified. The methodology proposed is applied to the CD4 cell count data and the results are compared with those obtained from an 'unweighted' analysis. These results demonstrate how an analysis that fails to account for drop-outs can mislead.
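The following is a small sketch of the inverse-probability-weighting idea: the probability of remaining under observation is modelled, and the quantile regression check loss is weighted by the inverse of that probability. The simulated data, the logistic drop-out model, and the direct minimization of the weighted check loss are assumptions made for illustration rather than the estimating-equation implementation used with the CD4 data.

```python
# Hedged sketch: inverse-probability-weighted quantile regression under drop-out.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Simulated data: outcome y, covariate x, and an observation (non-drop-out) indicator.
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * x)))     # drop-out depends on x
observed = rng.uniform(size=n) < p_obs

# Step 1: model the probability of being observed (here logistic in x).
drop_model = LogisticRegression().fit(x.reshape(-1, 1), observed.astype(int))
w = 1.0 / drop_model.predict_proba(x[observed].reshape(-1, 1))[:, 1]

# Step 2: minimize the inverse-probability-weighted check loss at quantile tau.
tau = 0.5
X = np.column_stack([np.ones(observed.sum()), x[observed]])
yo = y[observed]

def weighted_check_loss(beta):
    u = yo - X @ beta
    return np.sum(w * u * (tau - (u < 0)))

beta_hat = minimize(weighted_check_loss, x0=np.zeros(2), method="Nelder-Mead").x
print("IPW quantile regression estimates:", beta_hat)
```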
47.
Traditional credit risk assessment models do not consider the time factor; they only consider whether a customer will default, not when the default will occur, so the result cannot help a manager make a profit-maximizing decision. In fact, even if a customer defaults, the financial institution can still earn a profit under some conditions. Much recent research applies the Cox proportional hazards model to credit scoring in order to predict the time at which a customer is most likely to default. However, fully exploiting the dynamic capability of the Cox proportional hazards model requires time-varying macroeconomic variables, which involves more advanced data collection. Since short-term defaults are the cases that cause a great loss to a financial institution, a loan manager approving applications is more interested in identifying applications that may default within a short period than in predicting exactly when a loan will default. This paper proposes a decision tree-based short-term default credit risk assessment model: the decision tree is used to screen for short-term defaults and to produce a highly accurate model that distinguishes defaulting loans. The paper integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) into the credit risk model to improve the stability of the decision tree and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan is presented to further illustrate the proposed approach. A comparison with logistic regression and Cox proportional hazards models shows that the recall and precision of the proposed model are clearly superior.
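A minimal sketch of the SMOTE-plus-bagging step on a synthetic unbalanced data set is shown below, using scikit-learn and imbalanced-learn; the synthetic data stand in for the confidential SME loan data, and the hyperparameters are illustrative.

```python
# Hedged sketch: SMOTE oversampling followed by bagged decision trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier        # bags decision trees by default
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Highly unbalanced synthetic data: roughly 5% "short-term default" cases.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (default) class on the training data only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Bagging of decision trees (the default base learner) for added stability.
clf = BaggingClassifier(n_estimators=100, random_state=0).fit(X_res, y_res)

print(classification_report(y_te, clf.predict(X_te), digits=3))
```

Applying SMOTE only to the training split, as above, avoids leaking synthetic minority examples into the evaluation data.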
48.
We show the second-order relative accuracy, on bounded sets, of the Studentized bootstrap, exponentially tilted bootstrap and nonparametric likelihood tilted bootstrap, for means and smooth functions of means. We also consider the relative errors for larger deviations. Our method exploits certain connections between Edgeworth and saddlepoint approximations to simplify the computations.
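As a concrete reference point, the sketch below implements the Studentized bootstrap confidence interval for a simple mean; the exponentially tilted and nonparametric likelihood tilted variants reweight the resampling distribution instead of resampling uniformly and are not shown here.

```python
# Hedged sketch: Studentized bootstrap confidence interval for a mean.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=40)
n, B, alpha = len(x), 2000, 0.05

xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
t_star = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    t_star[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n))

lo, hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
print("Studentized bootstrap CI:", (xbar - hi * se, xbar - lo * se))
```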
49.
Many of the available methods for estimating small-area parameters are model-based approaches in which auxiliary variables are used to predict the variable of interest. For models that are nonlinear, prediction is not straightforward. MacGibbon and Tomberlin, and Farrell, MacGibbon, and Tomberlin, have proposed approaches that require microdata for all individuals in a small area. In this article, we develop a method, based on a second-order Taylor-series expansion to obtain model-based predictions, that requires only local-area summary statistics for both continuous and categorical auxiliary variables. The methodology is evaluated using data based on a U.S. Census.
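The following sketch shows a second-order Taylor approximation of this general kind for a logistic (nonlinear) model when only area-level means and covariances of the auxiliary variables are available; the coefficients and summary statistics are made up, and the formula is a generic expansion rather than the exact estimator of the article.

```python
# Hedged sketch: second-order Taylor approximation of a mean predicted probability
# using only area-level summary statistics (mean vector and covariance of x).
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_prob_taylor(beta, mu, sigma):
    """E[expit(x'beta)] ~ expit(eta) + 0.5 * expit''(eta) * Var(x'beta)."""
    eta = mu @ beta
    p = expit(eta)
    d2 = p * (1 - p) * (1 - 2 * p)          # second derivative of expit
    return p + 0.5 * d2 * (beta @ sigma @ beta)

beta = np.array([-1.0, 0.8, 0.5])           # fitted model coefficients (assumed)
mu = np.array([1.0, 0.2, -0.3])             # area-level means (with intercept term)
sigma = np.diag([0.0, 0.5, 0.9])            # area-level covariance of x (assumed)
print(mean_prob_taylor(beta, mu, sigma))
```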
50.
In the estimation of a population mean or total from a random sample, certain methods based on linear models are known to be automatically design consistent, regardless of how well the underlying model describes the population. A sufficient condition is identified for this type of robustness to model failure; the condition, which we call 'internal bias calibration', relates to the combination of a model and the method used to fit it. Included among the internally bias-calibrated models, in addition to the aforementioned linear models, are certain canonical link generalized linear models and nonparametric regressions constructed from them by a particular style of local likelihood fitting. Other models can often be made robust by using a suboptimal fitting method. Thus the class of model-based, but design consistent, analyses is enlarged to include more realistic models for certain types of survey variable such as binary indicators and counts. Particular applications discussed are the estimation of the size of a population subdomain, as arises in tax auditing for example, and the estimation of a bootstrap tail probability.
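The sketch below illustrates the internal-bias-calibration property for a canonical-link GLM: a survey-weighted logistic fit with an intercept has design-weighted residuals that sum to zero, so the plug-in prediction of a subdomain count inherits design consistency. The simulated finite population and inclusion probabilities are assumptions made purely for illustration.

```python
# Hedged sketch: internal bias calibration of a survey-weighted canonical-link GLM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Finite population with a binary indicator y (e.g., membership of a subdomain).
N = 10000
xU = rng.normal(size=N)
yU = rng.uniform(size=N) < 1 / (1 + np.exp(-(-1.0 + 1.2 * xU)))

# Unequal-probability sample: inclusion probability increasing in x.
pi = np.clip(0.02 + 0.03 * (xU - xU.min()), 0.02, 0.3)
s = rng.uniform(size=N) < pi
w = 1.0 / pi[s]                                   # design weights

X_s = sm.add_constant(xU[s])
fit = sm.GLM(yU[s].astype(float), X_s, family=sm.families.Binomial(),
             freq_weights=w).fit()

# Internal bias calibration: design-weighted residuals sum to (numerically) zero.
print("sum of weighted residuals:", np.sum(w * (yU[s] - fit.fittedvalues)))

# Plug-in, model-based estimate of the subdomain size over the whole population.
X_U = sm.add_constant(xU)
print("plug-in estimate:", fit.predict(X_U).sum(), " true:", yU.sum())
```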