41.
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered: penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier-Lasso (C-Lasso) that serves to shrink individual coefficients to the unknown group-specific coefficients. C-Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C-Lasso also achieves the oracle property so that group-specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C-Lasso is preserved in some special cases. Simulations demonstrate good finite-sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.
42.
Binary logistic regression is a widely used statistical method when the dependent variable has two categories. In many applications of logistic regression, the independent variables are collinear, a situation known as the multicollinearity problem. It is known that multicollinearity inflates the variance of the maximum likelihood estimator (MLE). Therefore, this article introduces new shrinkage parameters for the Liu-type estimator of Liu (2003) in the logistic regression model defined by Huang (2012) in order to decrease the variance and overcome the multicollinearity problem. A Monte Carlo study is designed to show the superiority of the proposed estimators over the MLE in terms of mean squared error (MSE) and mean absolute error (MAE). Moreover, a real data example is given to demonstrate the advantages of the new shrinkage parameters.
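As a rough illustration of this kind of shrinkage, the sketch below fits the logistic MLE by Newton-Raphson and then applies a Liu-type correction of the common textbook form (X'WX + kI)^{-1}(X'WX - dI)β̂_MLE. This is a generic form, not necessarily the exact estimator of Huang (2012) or the article's new shrinkage parameters, and the values of k and d are purely illustrative.

```python
import numpy as np

def logistic_mle(X, y, n_iter=50, tol=1e-8):
    """Plain Newton-Raphson (IRLS) fit of logistic regression (no intercept;
    add a column of ones to X if an intercept is wanted)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1 - p)                       # diagonal of the weight matrix W
        grad = X.T @ (y - p)
        hess = X.T @ (X * w[:, None])         # X' W X
        step = np.linalg.solve(hess, grad)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

def liu_type_logistic(X, y, k=1.0, d=0.5):
    """Liu-type shrinkage of the logistic MLE; k and d are illustrative values."""
    beta_mle = logistic_mle(X, y)
    p = 1.0 / (1.0 + np.exp(-X @ beta_mle))
    XtWX = X.T @ (X * (p * (1 - p))[:, None])
    I = np.eye(X.shape[1])
    return np.linalg.solve(XtWX + k * I, (XtWX - d * I) @ beta_mle)
```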
43.
We propose an exploratory data analysis approach when data are observed as intervals in a nonparametric regression setting. The interval-valued data contain richer information than single-valued data in the sense that they provide both center and range information of the underlying structure. Conventionally, these two attributes have been studied separately as traditional tools can be readily used for single-valued data analysis. We propose a unified data analysis tool that attempts to capture the relationship between response and covariate by simultaneously accounting for variability present in the data. It utilizes a kernel smoothing approach, which is conducted in scale-space so that it considers a wide range of smoothing parameters rather than selecting an optimal value. It also visually summarizes the significance of trends in the data as a color map across multiple locations and scales. We demonstrate its effectiveness as an exploratory data analysis tool for interval-valued data using simulated and real examples.
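A minimal sketch of the underlying idea only, not the authors' scale-space significance map: smooth the centers and half-ranges of interval-valued responses with a Nadaraya-Watson estimator over a family of bandwidths, so that a lower and an upper fitted curve are obtained at every scale. The function names and the Gaussian kernel choice are illustrative assumptions.

```python
import numpy as np

def nw_smooth(x, y, grid, h):
    """Nadaraya-Watson estimate of E[y|x] on `grid` with Gaussian kernel bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def smooth_intervals(x, y_low, y_up, grid, bandwidths):
    """Smooth interval-valued responses through their centers and half-ranges
    over several bandwidths (the scale-space idea)."""
    center = (y_low + y_up) / 2.0
    half_range = (y_up - y_low) / 2.0
    out = {}
    for h in bandwidths:
        c_hat = nw_smooth(x, center, grid, h)
        r_hat = nw_smooth(x, half_range, grid, h)
        out[h] = (c_hat - r_hat, c_hat + r_hat)   # smoothed lower/upper curves
    return out
```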
44.
The recursive least squares technique is often extended with exponential forgetting as a tool for parameter estimation in time-varying systems. The distribution of the resulting parameter estimates is, however, unknown when the forgetting factor is less than one. In this paper an approximate expression for the bias of the recursively obtained parameter estimates in a time-invariant AR(na) process with arbitrary noise is given, showing that the bias is non-zero and giving bounds on the approximation errors. Simulations confirm the approximate expressions.
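For orientation, the sketch below reproduces the standard recursive least squares recursion with exponential forgetting for an AR(na) model; it shows the textbook estimator being analyzed, not the bias approximation derived in the paper, and the forgetting factor and initialization values are illustrative.

```python
import numpy as np

def rls_forgetting(y, na=2, lam=0.98, delta=1e3):
    """Recursive least squares with exponential forgetting for an AR(na) model
    y[t] = a1*y[t-1] + ... + a_na*y[t-na] + e[t]."""
    y = np.asarray(y, dtype=float)
    theta = np.zeros(na)               # parameter estimates
    P = delta * np.eye(na)             # large initial covariance
    history = []
    for t in range(na, len(y)):
        x = y[t - na:t][::-1]          # regressor: [y[t-1], ..., y[t-na]]
        err = y[t] - x @ theta         # a priori prediction error
        K = P @ x / (lam + x @ P @ x)  # gain vector
        theta = theta + K * err
        P = (P - np.outer(K, x) @ P) / lam
        history.append(theta.copy())
    return theta, np.array(history)
```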
45.
In this paper, we consider, from a Bayesian perspective, the estimation of the three parameters that determine the efficient frontier: the expected return and the variance of the global minimum variance portfolio, and the slope parameter. Their posterior distribution is derived by assigning the diffuse and the conjugate priors to the mean vector and the covariance matrix of the asset returns, and it is presented in terms of a stochastic representation. Furthermore, Bayesian estimates together with the standard uncertainties for all three parameters are provided, and their asymptotic distributions are established. All obtained findings are applied to real data consisting of the returns on assets included in the S&P 500. The empirical properties of the efficient frontier are then examined in detail.
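For readers unfamiliar with these quantities, the sketch below computes the plug-in (sample) counterparts of the three frontier parameters named above, the GMV expected return, the GMV variance, and the slope, using the standard closed-form expressions in terms of the sample mean vector and covariance matrix; it does not reproduce the paper's Bayesian posterior analysis.

```python
import numpy as np

def frontier_parameters(returns):
    """Plug-in (sample) versions of the three parameters that determine the
    efficient frontier: GMV expected return, GMV variance, and the slope.
    `returns` is a T x p array of asset returns."""
    mu = returns.mean(axis=0)                 # sample mean vector
    Sigma = np.cov(returns, rowvar=False)     # sample covariance matrix
    ones = np.ones(len(mu))
    Sigma_inv = np.linalg.inv(Sigma)

    denom = ones @ Sigma_inv @ ones
    r_gmv = (mu @ Sigma_inv @ ones) / denom   # expected return of the GMV portfolio
    v_gmv = 1.0 / denom                       # variance of the GMV portfolio
    Q = Sigma_inv - np.outer(Sigma_inv @ ones, ones @ Sigma_inv) / denom
    slope = mu @ Q @ mu                       # slope of the efficient frontier
    return r_gmv, v_gmv, slope
```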
46.
A smoothed bootstrap method is presented for the purpose of bandwidth selection in nonparametric hazard rate estimation for iid data. In this context, two new bootstrap bandwidth selectors are established based on the exact expression of the bootstrap version of the mean integrated squared error of some approximations of the kernel hazard rate estimator. This is very useful since Monte Carlo approximation is no longer needed for the implementation of the two bootstrap selectors. A simulation study is carried out in order to show the empirical performance of the new bootstrap bandwidths and to compare them with other existing selectors. The methods are illustrated by applying them to a diabetes data set.
47.
In this paper, we discuss some theoretical results and properties of the discrete Weibull distribution, which was introduced by Nakagawa and Osaki [The discrete Weibull distribution. IEEE Trans Reliab. 1975;24:300–301]. We study the monotonicity of the probability mass, survival and hazard functions. Moreover, reliability, moments, p-quantiles, entropies and order statistics are also studied. We consider likelihood-based methods to estimate the model parameters based on complete and censored samples, and to derive confidence intervals. We also consider two additional methods to estimate the model parameters. The uniqueness of the maximum likelihood estimate of one of the parameters that index the discrete Weibull model is discussed. Numerical evaluation of the considered model is performed by Monte Carlo simulations. For illustrative purposes, two real data sets are analyzed.
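A minimal sketch, assuming the Nakagawa-Osaki parameterization P(X = x) = q^(x^beta) - q^((x+1)^beta) for x = 0, 1, 2, ...: the probability mass function, the survival function, and a simple maximum likelihood fit from a complete sample by numerical optimization. The starting values and bounds are illustrative, and censoring is not handled here.

```python
import numpy as np
from scipy.optimize import minimize

def dw_pmf(x, q, beta):
    """Nakagawa-Osaki discrete Weibull pmf: P(X = x) = q^(x^beta) - q^((x+1)^beta)."""
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dw_survival(x, q, beta):
    """Survival function P(X >= x) = q^(x^beta)."""
    return q ** (np.asarray(x, dtype=float) ** beta)

def dw_mle(data):
    """Maximum likelihood estimates of (q, beta) from a complete sample."""
    def negloglik(par):
        q, beta = par
        p = dw_pmf(data, q, beta)
        return -np.sum(np.log(np.clip(p, 1e-300, None)))
    res = minimize(negloglik, x0=[0.5, 1.0],
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
    return res.x
```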
48.
Prior to 2002, little was known about sexual abuse within the Catholic Church. After the Boston Globe broke the story about John Geoghan, a priest in the Boston Archdiocese who was accused of abusing numerous children, convicted of one count of indecent assault, and eventually murdered in prison, the Church had many questions to answer. To this end, the United States Conference of Catholic Bishops (USCCB) commissioned John Jay College of Criminal Justice to research the nature and scope, as well as the causes and context, of child sexual abuse within the Catholic Church. This research analyzes the data from the John Jay studies using a new quantitative technique capable of adjusting for distortions introduced by delays in abuse reporting. By isolating discontinuities in the model parameter time series, we determine that changes in reporting patterns occurred during the period 1982-1988. A posteriori to the analysis, we provide some possible explanations for the changes in abuse reporting associated with the change-point. While the scope of this paper is limited to presenting a new methodological approach within the frame of a particular case study, the techniques are more broadly applicable in settings where reporting lag is manifested.
49.
The cardinality-constrained portfolio selection problem has been a focal topic in portfolio research in recent years, but parameter uncertainty directly affects how well the model performs. The parameters involved in the cardinality-constrained portfolio problem include not only the expected returns, which earlier research has regarded as very important, but also the sparsity level that controls the size of the portfolio, and dedicated studies on estimating the optimal sparsity level remain particularly scarce. To make the cardinality-constrained portfolio model serve investment decisions better, this paper starts from investor utility and uses the idea of bilevel programming to build a bilevel parameter estimation model for the cardinality-constrained portfolio problem. Based on the structure of the model, a derivative-free optimization algorithm framework is designed, and the algorithm's subproblems are solved with ADMM. In the experiments, estimates of the expected returns and the optimal sparsity level are obtained from real market data, and comparisons with the equal-weight strategy and the mean-variance model with upper and lower bound constraints demonstrate the effectiveness and practicality of the proposed model and algorithm. Finally, the bilevel parameter estimation model proposed in this paper is extended to a more general form.
50.
A simple method is proposed for obtaining the likelihood ratio confidence interval for the proportion parameter p of a binomial distribution. It is compared by simulation with the WScore, Plus4, and Jeffreys confidence intervals under the criteria of average coverage probability, average interval length, and the 95% confidence interval of the interval length. The experiments show that when the binomial distribution b(n, p) has n ≥ 20 and p ∈ (0.1, 0.9), the likelihood ratio confidence interval obtained by this method performs well. When the point estimate of p is not close to 0 or 1 and n ≥ 20, this method is recommended for constructing a confidence interval for p.
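A minimal sketch of how a likelihood ratio interval for a binomial proportion can be computed, not necessarily the simplified method proposed in the article: the interval endpoints are found by numerically inverting the likelihood ratio statistic 2[l(p_hat) - l(p)] <= chi2(1, 0.95) with a root finder.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def binom_lr_ci(x, n, level=0.95):
    """Likelihood-ratio confidence interval for a binomial proportion p."""
    p_hat = x / n
    cut = chi2.ppf(level, df=1)

    def loglik(p):
        # log-likelihood up to an additive constant
        return x * np.log(p) + (n - x) * np.log(1 - p)

    def stat(p):
        # LR statistic minus the chi-square cutoff; roots are the CI endpoints
        ell_hat = (x * np.log(p_hat) if x > 0 else 0.0) + \
                  ((n - x) * np.log(1 - p_hat) if x < n else 0.0)
        return 2.0 * (ell_hat - loglik(p)) - cut

    eps = 1e-12
    lower = 0.0 if x == 0 else brentq(stat, eps, min(p_hat, 1 - eps))
    upper = 1.0 if x == n else brentq(stat, max(p_hat, eps), 1 - eps)
    return lower, upper

# Example: 95% likelihood-ratio interval for 12 successes out of 40 trials
print(binom_lr_ci(12, 40))
```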