Full-text access type
Paid full text | 3884 articles |
Free | 84 articles |
Domestic free | 16 articles |
Subject classification
Management | 395 articles |
Ethnology | 6 articles |
Demography | 70 articles |
Collected works | 60 articles |
Theory and methodology | 111 articles |
General | 373 articles |
Sociology | 254 articles |
Statistics | 2715 articles |
Publication year
2024 | 5 articles |
2023 | 39 articles |
2022 | 38 articles |
2021 | 52 articles |
2020 | 71 articles |
2019 | 110 articles |
2018 | 159 articles |
2017 | 227 articles |
2016 | 115 articles |
2015 | 118 articles |
2014 | 126 articles |
2013 | 861 articles |
2012 | 325 articles |
2011 | 143 articles |
2010 | 124 articles |
2009 | 156 articles |
2008 | 148 articles |
2007 | 145 articles |
2006 | 121 articles |
2005 | 135 articles |
2004 | 112 articles |
2003 | 95 articles |
2002 | 71 articles |
2001 | 76 articles |
2000 | 69 articles |
1999 | 55 articles |
1998 | 51 articles |
1997 | 38 articles |
1996 | 19 articles |
1995 | 16 articles |
1994 | 29 articles |
1993 | 20 articles |
1992 | 19 articles |
1991 | 13 articles |
1990 | 11 articles |
1989 | 8 articles |
1988 | 12 articles |
1987 | 7 articles |
1986 | 4 articles |
1985 | 8 articles |
1984 | 7 articles |
1983 | 9 articles |
1982 | 8 articles |
1981 | 1 article |
1980 | 2 articles |
1979 | 2 articles |
1978 | 2 articles |
1977 | 1 article |
1976 | 1 article |
3984 matching results found.
61.
Gabriele Brondino, Communications in Statistics - Simulation and Computation, 2013, 42(2): 407-417
The standard tensile test is one of the most frequently used tools for evaluating the mechanical properties of metals. An empirical model proposed by Ramberg and Osgood fits tensile test data using a nonlinear model for the strain in terms of the stress. It is an errors-in-variables (EIV) model because of the uncertainty affecting both the strain and the stress measurement instruments. SIMEX, a simulation-based method for estimating model parameters, is effective at reducing the bias due to measurement error in EIV models. The plan of this article is the following. In Sec. 2, we introduce the Ramberg-Osgood model and a reparametrization based on different assumptions on the independent variable. In Sec. 3, we summarize the SIMEX method for the case at hand. Section 4 compares SIMEX with other estimation methods in order to highlight the peculiarities of the different approaches. The last section offers some concluding remarks.
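As a generic illustration of the SIMEX idea named in this abstract (not the authors' tensile-test procedure), the sketch below uses a hypothetical linear errors-in-variables model: extra measurement noise is added at increasing levels λ, the naive estimator is averaged at each level, and the trend is extrapolated back to λ = -1, where no measurement error remains. All data and values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical errors-in-variables setup: true covariate x is observed
# as w = x + u with known measurement-error s.d. sigma_u.
n, b1, sigma_u = 500, 2.0, 0.5
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, sigma_u, n)          # error-contaminated covariate
y = 1.0 + b1 * x + rng.normal(0.0, 0.2, n)

def slope(w, y):
    return np.polyfit(w, y, 1)[0]

naive = slope(w, y)                          # attenuated towards zero

# SIMEX: add extra noise at levels lambda, average the refitted slopes,
# then extrapolate the fitted trend back to lambda = -1 (no error).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = [np.mean([slope(w + rng.normal(0.0, sigma_u * np.sqrt(lam), n), y)
                  for _ in range(200)])
         for lam in lambdas]
simex_slope = np.polyval(np.polyfit(lambdas, means, 2), -1.0)
print(naive, simex_slope)
```

With these settings the naive slope is biased towards zero by the attenuation factor 1/(1 + σ²_u/σ²_x), while the quadratic SIMEX extrapolation recovers most of the bias.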
62.
This article deals with the bootstrap as an alternative method for constructing confidence intervals for the hyperparameters of structural models. The procedure considered is the classical nonparametric bootstrap applied to the residuals of the fitted model. Its performance is assessed empirically through Monte Carlo simulations implemented in Ox. Asymptotic and percentile bootstrap confidence intervals for the hyperparameters are built and compared by means of their coverage percentages. The results are similar, but the bootstrap procedure performs better for small sample sizes. The methods are applied to a real time series, and confidence intervals are built for the hyperparameters.
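The residual-bootstrap percentile interval described here can be sketched on a toy model (a simple linear trend rather than a structural time-series model, and with invented data): fit once, resample the residuals with replacement, rebuild pseudo-series, refit, and take empirical percentiles of the refitted parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trend data: y_t = 0.5 + 0.03 t + noise.
n = 100
t = np.arange(n)
y = 0.5 + 0.03 * t + rng.normal(0.0, 0.4, n)

X = np.column_stack([np.ones(n), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Nonparametric bootstrap in the residuals of the fitted model.
B = 1000
boot_slopes = np.empty(B)
for b in range(B):
    y_star = X @ beta + rng.choice(resid, n, replace=True)
    boot_slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

lo, hi = np.percentile(boot_slopes, [2.5, 97.5])
print(f"95% percentile CI for slope: [{lo:.4f}, {hi:.4f}]")
```

The percentile interval is simply the 2.5% and 97.5% quantiles of the bootstrap distribution; coverage comparisons like the one in the article repeat this over many simulated series.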
63.
Patrizio Frederic, Communications in Statistics - Simulation and Computation, 2013, 42(7): 1263-1269
We display the first two moment functions of the Logitnormal(μ, σ²) family of distributions, conveniently described in terms of the Normal mean μ and the Normal signal-to-noise ratio μ/σ, the parameters that generate the family. Long neglected because of the numerical integrations required to compute them, these moment functions should aid the sensible interpretation of logistic regression statistics and the specification of "diffuse" prior distributions in hierarchical models, which can be deceiving. We also use numerical integration to compare the correlation between bivariate Logitnormal variables with the correlation between the bivariate Normal variables from which they are transformed.
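The numerical integration referred to above is straightforward to sketch: a Logitnormal variable is p = 1/(1 + e^(-z)) with z ~ N(μ, σ²), so its k-th moment is the integral of p^k against the Normal density. The fixed-grid quadrature below is a minimal illustration, not the paper's computation.

```python
import numpy as np

# k-th moment of p = 1/(1 + exp(-z)), z ~ N(mu, sigma^2),
# by simple fixed-grid quadrature (no closed form exists).
def logitnormal_moment(mu, sigma, k=1, npts=20001):
    z = np.linspace(mu - 10.0 * sigma, mu + 10.0 * sigma, npts)
    pdf = np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    p = 1.0 / (1.0 + np.exp(-z))
    return float(np.sum(p ** k * pdf) * (z[1] - z[0]))

m1 = logitnormal_moment(0.0, 1.0, 1)             # equals 0.5 by symmetry when mu = 0
var = logitnormal_moment(0.0, 1.0, 2) - m1 ** 2  # second central moment
print(m1, var)
```

When μ = 0 the distribution is symmetric about 1/2, which gives a convenient sanity check on the quadrature.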
64.
Zhensheng Huang, Communications in Statistics - Simulation and Computation, 2013, 42(10): 2252-2263
An empirical likelihood method was proposed by Owen and has been extended to many semiparametric and nonparametric models with a continuous response variable. However, less attention has been focused on the generalized regression model. This article systematically studies two adjusted empirical-likelihood-based methods for generalized varying-coefficient partially linear models. Based on the popular profile likelihood estimation procedure, a new adjusted empirical likelihood technique for the parameter is established, and the resulting statistics are shown to be asymptotically standard chi-square distributed. Further, adjusted empirical-likelihood-based confidence regions are established, and efficient adjusted profile empirical-likelihood-based confidence intervals/regions for any components of the parameter that are of primary interest are also constructed. Their asymptotic properties are derived. Numerical studies illustrate the performance of the proposed inference procedures.
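Owen's empirical likelihood, which this entry builds on, is easiest to see in its simplest form: testing a scalar mean. The sketch below (illustrative only, far simpler than the varying-coefficient setting of the article) solves for the Lagrange multiplier by Newton's method and returns the -2 log-likelihood ratio, which is asymptotically chi-square with 1 degree of freedom.

```python
import numpy as np

rng = np.random.default_rng(5)

# Empirical log-likelihood ratio for a scalar mean mu0 (Owen's method):
# weights w_i are proportional to 1 / (1 + t (x_i - mu0)), where t solves
# sum_i (x_i - mu0) / (1 + t (x_i - mu0)) = 0.
def el_loglik_ratio(x, mu0, iters=50):
    d = x - mu0
    t = 0.0
    for _ in range(iters):                     # Newton steps for multiplier t
        g = np.sum(d / (1.0 + t * d))          # estimating equation
        gp = -np.sum(d ** 2 / (1.0 + t * d) ** 2)
        t -= g / gp
    return 2.0 * np.sum(np.log1p(t * d))       # -2 log R(mu0) ~ chi-square(1)

x = rng.normal(0.0, 1.0, 200)
print(el_loglik_ratio(x, x.mean()), el_loglik_ratio(x, 0.0))
```

At the sample mean the ratio is exactly zero (the multiplier is zero there), and at the true mean it behaves like a chi-square(1) draw, which is what makes empirical-likelihood confidence regions possible.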
65.
Two families of parameter estimation procedures for the stable laws based on a variant of the characteristic function are provided. The methodology, which produces viable computational procedures for the stable laws, is generally applicable to other families of distributions across a variety of settings. Both families of procedures may be described as a modified weighted chi-squared minimization procedure, and both explicitly take account of constraints on the parameter space. Influence functions for, and efficiencies of, the estimators are given. If x1, x2, …, xn is a random sample from an unknown distribution F, a method for determining the stable law to which F is attracted is developed. Procedures for regression and autoregression with stable error structure are provided. A number of examples are given.
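Characteristic-function-based estimation for stable laws, as named in this abstract, can be sketched in a much cruder form than the weighted chi-squared procedures of the paper: for a symmetric α-stable law, |φ(t)| = exp(-c|t|^α), so log(-log|φ_n(t)|) is linear in log|t| with slope α, where φ_n is the empirical characteristic function. This regression-on-the-ECF illustration (Cauchy data, so α = 1) is a hypothetical stand-in, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Cauchy sample: symmetric alpha-stable with alpha = 1.
x = rng.standard_cauchy(20000)

# Empirical characteristic function |phi_n(t)| at a few small t values.
t = np.array([0.1, 0.2, 0.5, 1.0])
phi = np.array([np.abs(np.mean(np.exp(1j * tt * x))) for tt in t])

# log(-log|phi(t)|) = log(c) + alpha * log|t|  =>  alpha is a slope.
alpha_hat, _ = np.polyfit(np.log(t), np.log(-np.log(phi)), 1)
print(alpha_hat)
```

Small t values are used because the ECF is most informative about the tail index near the origin; the paper's weighted minimization formalizes this weighting.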
66.
In this paper, the asymptotic relative efficiency (ARE) of Wald tests for the Tweedie class of models with log-linear mean is considered when the auxiliary variable is measured with error. Wald test statistics based on the naive maximum likelihood estimator and on a consistent estimator obtained by using Nakamura's (1990) corrected score function approach are defined. As shown analytically, the Wald statistics based on the naive and corrected score function estimators are asymptotically equivalent in terms of ARE. On the other hand, the asymptotic relative efficiency of the naive and corrected Wald statistics with respect to the Wald statistic based on the true covariate equals the square of the correlation between the unobserved and the observed covariate. A small-scale Monte Carlo study and an example illustrate the small sample size situation.
67.
Colin Chen, Communications in Statistics - Theory and Methods, 2013, 42(5-6): 1257-1271
Following the extension from linear mixed models to additive mixed models, an extension from generalized linear mixed models to generalized additive mixed models is made. Algorithms are developed to compute the MLEs of the nonlinear effects and the covariance structures based on the penalized marginal likelihood. Convergence of the algorithms and selection of the smoothing parameters are discussed.
68.
We propose a simulation-based Bayesian approach to the analysis of long memory stochastic volatility models, stationary and nonstationary. The main tool used to reduce the likelihood function to a tractable form is an approximate state-space representation of the model. A data set of stock market returns is analyzed with the proposed method. The approach taken here allows a quantitative assessment of the empirical evidence in favor of the stationarity, or nonstationarity, of the instantaneous volatility of the data.
69.
This article investigates a quasi-maximum exponential likelihood estimator (QMELE) for a nonstationary generalized autoregressive conditional heteroscedastic (GARCH(1,1)) model. Asymptotic normality of this estimator is derived under a nonstationarity condition. A simulation study and a real example evaluate the performance of the QMELE for this model.
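For readers unfamiliar with the model named here, a GARCH(1,1) series is easy to simulate: y_t = √h_t · e_t with h_t = ω + α·y²_{t-1} + β·h_{t-1}. The sketch below uses illustrative parameter values in the stationary region (α + β < 1); it shows the data-generating process only, not the exponential-likelihood estimation studied in the article.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate GARCH(1,1): y_t = sqrt(h_t) e_t, h_t = w + a y_{t-1}^2 + b h_{t-1}.
def simulate_garch(n, w=0.1, a=0.1, b=0.85):
    y = np.zeros(n)
    h = np.zeros(n)
    h[0] = w / (1.0 - a - b)          # start at the unconditional variance
    y[0] = np.sqrt(h[0]) * rng.normal()
    for t in range(1, n):
        h[t] = w + a * y[t - 1] ** 2 + b * h[t - 1]
        y[t] = np.sqrt(h[t]) * rng.normal()
    return y, h

y, h = simulate_garch(5000)
print(y.var())   # unconditional variance is w / (1 - a - b) = 2 in theory
```

Nonstationary regimes of the kind the article analyzes arise when α + β ≥ 1, in which case the unconditional variance no longer exists and standard Gaussian QMLE theory breaks down.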
70.
Traditional credit risk assessment models do not consider the time factor; they only consider whether a customer will default, not when. The result therefore cannot help a manager make a profit-maximizing decision. In fact, even if a customer defaults, the financial institution can still profit under some conditions. Most recent research applies the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default. However, to fully exploit the dynamic capability of the Cox proportional hazards model, time-varying macroeconomic variables are required, which involve more advanced data collection. Since short-term default cases are the ones that bring great losses to a financial institution, instead of predicting when a loan will default, a loan manager is more interested in identifying, at approval time, those applications that may default within a short period. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to filter short-term defaults and produce a highly accurate model that can distinguish default lending. The paper integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the decision tree's stability and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan further illustrates the proposed approach. Comparing the proposed approach with the logistic regression and Cox proportional hazards models shows that its recall and precision rates are clearly superior.
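The SMOTE over-sampling step named in this abstract can be sketched in a few lines: each synthetic minority sample is a random interpolation between a minority point and one of its k nearest minority neighbours. This numpy-only version with invented data is a minimal illustration, not the paper's implementation (which also layers bagging and a decision tree on top).

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal SMOTE: interpolate between a minority point and one of its
# k nearest minority-class neighbours to synthesize new samples.
def smote(X_min, n_new, k=5):
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[j], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]       # skip the point itself
        nb = X_min[rng.choice(nbrs)]
        out[i] = X_min[j] + rng.random() * (nb - X_min[j])
    return out

# Hypothetical unbalanced data: 95 non-default vs 5 default cases.
X_maj = rng.normal(0.0, 1.0, (95, 2))
X_min = rng.normal(2.0, 1.0, (5, 2))
X_syn = smote(X_min, n_new=90)                 # roughly balances the classes
print(X_syn.shape)
```

Because each synthetic point is a convex combination of two minority points, the augmented minority class stays inside the original minority region, which is what lets the downstream tree learn a less biased decision boundary.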