131.
Abstract

In this article, we propose a penalized local log-likelihood method that locally selects the number of components in nonparametric finite mixture-of-regressions models via proportion shrinkage. Mean functions and variance functions are estimated simultaneously. We show that the number of components can be estimated consistently, and we further establish the asymptotic normality of the functional estimates. A modified EM algorithm is used to estimate the unknown functions. Simulations demonstrate the performance of the proposed method, and we illustrate it with an empirical analysis of housing price index data for the United States.
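As background for the modified EM algorithm mentioned above, the sketch below fits a plain (global, parametric) two-component Gaussian mixture of linear regressions by EM. It is not the authors' penalized local method; the data-generating parameters, the shared error scale, and the crude initialization are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Simulate a two-component mixture of linear regressions (hypothetical truth):
# y = 1 + 2x + eps with prob 0.6, y = -1 - x + eps with prob 0.4, eps ~ N(0, 0.3^2)
n = 400
data = []
for _ in range(n):
    x = random.uniform(-1, 1)
    if random.random() < 0.6:
        y = 1 + 2 * x + random.gauss(0, 0.3)
    else:
        y = -1 - x + random.gauss(0, 0.3)
    data.append((x, y))

def em_mixreg(data, iters=200):
    # Crude fixed initialization near each component; s is a common error scale.
    pi, (a1, b1), (a2, b2), s = 0.5, (1.0, 1.0), (-1.0, -1.0), 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each observation
        r = []
        for x, y in data:
            d1 = math.exp(-((y - a1 - b1 * x) ** 2) / (2 * s * s)) * pi
            d2 = math.exp(-((y - a2 - b2 * x) ** 2) / (2 * s * s)) * (1 - pi)
            r.append(d1 / (d1 + d2))
        # M-step: weighted least squares for each component's regression line
        def wls(w):
            sw = sum(w)
            sx = sum(wi * x for wi, (x, _) in zip(w, data))
            sy = sum(wi * y for wi, (_, y) in zip(w, data))
            sxx = sum(wi * x * x for wi, (x, _) in zip(w, data))
            sxy = sum(wi * x * y for wi, (x, y) in zip(w, data))
            b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
            return (sy - b * sx) / sw, b
        a1, b1 = wls(r)
        a2, b2 = wls([1 - ri for ri in r])
        pi = sum(r) / len(r)
        sse = sum(ri * (y - a1 - b1 * x) ** 2 + (1 - ri) * (y - a2 - b2 * x) ** 2
                  for ri, (x, y) in zip(r, data))
        s = math.sqrt(sse / len(data))
    return pi, (a1, b1), (a2, b2), s

pi, c1, c2, s = em_mixreg(data)
print(pi, c1, c2, s)
```

With these settings the recovered mixing proportion and regression coefficients land close to the simulated truth; the paper's method additionally localizes the fit and shrinks redundant proportions to select the number of components.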
132.
Single-cohort stage-frequency data are considered when assessing the stage reached by individuals through destructive sampling. For this type of data, when all hazard rates are assumed constant and equal, Laplace transform methods have been applied in the past to estimate the parameters of each stage-duration distribution and the overall hazard rate. If hazard rates are not all equal, estimating stage-duration parameters using Laplace transform methods becomes complex. In this paper, two new models are proposed to estimate stage-dependent maturation parameters using Laplace transform methods where non-trivial hazard rates apply. The first model encompasses hazard rates that are constant within each stage but vary between stages; the second encompasses time-dependent hazard rates within stages. Moreover, this paper introduces a method for estimating the hazard rate in each stage under the stage-wise constant hazard rates model. This work presents methods that could be used in specific types of laboratory studies, but the main motivation is to explore the relationships between stage maturation parameters that, in future work, could be exploited in applying Bayesian approaches. The application of the methodology in each model is evaluated using simulated data to illustrate the structure of these models.
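To make the constant-hazard building block concrete: under a constant hazard rate a stage duration is exponential, and the maximum-likelihood estimate of the rate is the reciprocal of the mean observed duration. This is only the elementary special case, not the paper's Laplace-transform estimators for stage-frequency data; the rate value is hypothetical.

```python
import random

random.seed(1)

# Constant hazard => exponential stage durations; MLE of the rate is
# n / (total observed duration), i.e. 1 / sample mean.
true_rate = 0.5  # hypothetical per-day hazard within one stage
durations = [random.expovariate(true_rate) for _ in range(5000)]
rate_hat = len(durations) / sum(durations)
print(rate_hat)
```

The estimate converges to the true hazard as the sample grows; the harder problem the paper addresses is recovering such parameters when only the stage occupied at sampling time is observed.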
133.
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach. This gives a flexible and easily implemented alternative to the previous method, which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse, and the parameter estimates obtained are compared with the well-known Jolly–Seber estimates based on single-capture methods.
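For readers unfamiliar with list-based population-size estimation, the classical two-list building block is the Lincoln–Petersen estimator (here in Chapman's bias-corrected form) for a closed population. The multiple-list, open-population methods of this abstract generalize this idea; the registry counts below are purely hypothetical.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator: population size from
# two overlapping lists of a closed population.
def chapman(n1, n2, m):
    # n1, n2: sizes of the two lists; m: individuals appearing on both
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical snapshot: 300 people on list A, 240 on list B, 60 on both.
print(chapman(300, 240, 60))
```

When the lists overlap completely (n1 = n2 = m), the estimate reduces to the list size itself, as it should.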
134.
In this paper, we study the joint Laplace transform and probability generating function of some random quantities that occur in each environment state by the time of ruin in a Markov-modulated risk process. These quantities include the duration spent in each state, the number of claims, and the aggregate amount of claims that occurred in each state by the time of ruin. Explicit formulae for the joint transforms, given the initial surplus and the initial and terminal environment states, are expressed in terms of a matrix version of the scale function. Moments and covariances of these ruin-related quantities are obtained, and numerical illustrations are presented. The joint transform of the duration spent in each state, the number of claims, and the aggregate amount of claims that occurred in each state by the time the surplus attains a certain level is also investigated.
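The ruin-related quantities above can be illustrated by direct simulation, even though the paper derives them analytically via matrix scale functions. The sketch below simulates a two-state Markov-modulated Poisson risk process and records, per state, the duration spent, the claim count, and the aggregate claims up to ruin (or a time horizon); all rates, claim-size means, and the premium are hypothetical.

```python
import random

random.seed(2)

# Two-state Markov-modulated risk model (hypothetical parameters):
# state 0 = "calm", state 1 = "volatile".
lam = [1.0, 3.0]      # claim arrival rate in each environment state
mu = [1.0, 1.5]       # mean (exponential) claim size in each state
switch = [0.2, 0.5]   # rate of leaving each environment state
premium = 2.0         # continuous premium income per unit time

def simulate_until_ruin(u0, horizon=200.0):
    t, u, state = 0.0, u0, 0
    dur, nclaims, agg = [0.0, 0.0], [0, 0], [0.0, 0.0]
    while t < horizon and u >= 0:
        t_claim = random.expovariate(lam[state])
        t_switch = random.expovariate(switch[state])
        dt = min(t_claim, t_switch)
        t += dt
        u += premium * dt
        dur[state] += dt
        if t_claim < t_switch:
            c = random.expovariate(1 / mu[state])  # claim size, mean mu[state]
            u -= c
            nclaims[state] += 1
            agg[state] += c
        else:
            state = 1 - state                      # environment switches
    return u < 0, dur, nclaims, agg

ruined, dur, nclaims, agg = simulate_until_ruin(u0=2.0)
print(ruined, dur, nclaims, agg)
```

Averaging such records over many paths approximates the moments and covariances that the paper obtains in closed form.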
135.
The class of affine LIBOR models is appealing since it satisfies three central requirements of interest rate modeling: it is arbitrage-free, interest rates are nonnegative, and caplet and swaption prices can be calculated analytically. In order to guarantee nonnegative interest rates, affine LIBOR models are driven by nonnegative affine processes, a restriction that makes it hard to produce volatility smiles. We modify the affine LIBOR models in such a way that real-valued affine processes can be used without destroying the nonnegativity of interest rates. Numerical examples show that pronounced volatility smiles are possible in this class of models.
136.
Modern Chinese has a special compound temporal structure, T1+(的)+T2, in which T1 and T2 are both deictic time points. In verbal communication, speakers often use a "T1+(的)+T2" expression to pull an event (whether already realized or not yet realized) back from its temporal distance into the present, producing the pragmatic effect of a heightened sense of immediacy; from the speaker's standpoint, this carries a positive communicative effect.
137.
A Study of the Effectiveness of Stationarity Testing Methods
Stationarity testing is an important topic in time-series analysis, yet the performance of existing testing methods lacks systematic comparison. This article studies the performance of stationarity tests from the perspective of sample length, conducting an empirical study with four methods: the ADF, PP, KPSS, and LMC tests. Simulation results show that the length of the time series markedly affects test accuracy: accuracy is low when the series is short, and it improves as the length grows but still fails to reach the 100% ceiling. When the sample is short, the asymptotic distributions of the test statistics are poorly approximated, so the tests' practical performance deserves scrutiny. Since sample lengths are finite, tests built on asymptotic distributions have limited room for improvement, and new testing approaches are worth exploring.
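The sample-length point can be seen with the bare Dickey–Fuller regression that underlies the ADF test: regress the first difference on the lagged level and inspect the t-statistic for the lag coefficient. This sketch omits the deterministic terms, lag augmentation, and critical values of the full ADF/PP/KPSS/LMC procedures; the series lengths and AR coefficients are illustrative.

```python
import math
import random

random.seed(3)

def df_stat(y):
    # t-statistic for rho in the Dickey-Fuller regression
    # delta_y_t = rho * y_{t-1} + e_t (no constant, no lagged differences)
    x = y[:-1]
    d = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    sxx = sum(v * v for v in x)
    rho = sum(a * b for a, b in zip(x, d)) / sxx
    resid = [b - rho * a for a, b in zip(x, d)]
    s2 = sum(r * r for r in resid) / (len(d) - 1)
    return rho / math.sqrt(s2 / sxx)

def ar1(n, phi):
    # AR(1) series; phi = 1 gives a random walk (unit root)
    y = [0.0]
    for _ in range(n - 1):
        y.append(phi * y[-1] + random.gauss(0, 1))
    return y

stat_rw = df_stat(ar1(500, 1.0))   # unit-root series: statistic near zero
stat_ar = df_stat(ar1(500, 0.5))   # stationary series: strongly negative
print(stat_rw, stat_ar)
```

The stationary series produces a far more negative statistic than the random walk; with much shorter series the two statistics separate less cleanly, which is the accuracy loss the article quantifies.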
138.
We consider confidence intervals for the stress-strength reliability Pr(X < Y) in the two-parameter exponential distribution. We derive the Bayesian highest posterior density interval using non-informative prior distributions and compare its performance with intervals based on the generalized pivotal quantity in terms of coverage probability and expected length. Our simulation study shows that the Bayesian interval performs better under these criteria, especially when the sample sizes are very small. An example is given.
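For intuition about the quantity being interval-estimated: in the simpler one-parameter exponential case (rates a and b, no location parameters, so this is not the paper's two-parameter setting), the stress-strength reliability has the closed form Pr(X < Y) = a / (a + b), which Monte Carlo simulation reproduces.

```python
import random

random.seed(4)

# X ~ Exp(rate a), Y ~ Exp(rate b): Pr(X < Y) = a / (a + b).
a, b = 2.0, 1.0
closed = a / (a + b)

n = 200000
mc = sum(random.expovariate(a) < random.expovariate(b) for _ in range(n)) / n
print(closed, mc)
```

The paper's contribution is the interval, not the point value: it quantifies the uncertainty in Pr(X < Y) when the parameters must be estimated from small samples.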
139.
A practical problem with large-scale survey data is the possible presence of overdispersion, which occurs when the data display more variability than is predicted by the variance-mean relationship. This article describes a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion. Specifically, several tests for detecting overdispersion are implemented on the basis of the asymptotic theory for maximum likelihood estimators. We discuss the results of a simulation experiment concerning the log-likelihood ratio, Wald, score, and profile tests. Finally, some real datasets are analyzed to illustrate these results.
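The phenomenon being tested for can be shown with the simplest diagnostic, the dispersion index (sample variance over sample mean): it is near 1 for Poisson counts and exceeds 1 for a gamma-mixed Poisson, which is overdispersed by construction. This is only the descriptive check, not the likelihood-based tests of the article; the parameter values are illustrative.

```python
import math
import random

random.seed(5)

def poisson(lam):
    # Knuth's multiplicative algorithm for one Poisson draw
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def dispersion_index(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return v / m  # ~1 for equidispersed counts, >1 for overdispersed

equi = [poisson(4.0) for _ in range(3000)]
# Gamma-mixed Poisson (negative binomial), same mean 4 but extra variance:
over = [poisson(random.gammavariate(2.0, 2.0)) for _ in range(3000)]
print(dispersion_index(equi), dispersion_index(over))
```

Formal procedures such as the log-likelihood ratio, Wald, and score tests sharpen this comparison into a test with known asymptotic null distribution.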
140.
After initiation of treatment, HIV viral load undergoes multiphasic changes, which indicates that the viral decay rate is a time-varying process. Mixed-effects models with different time-varying decay rate functions have been proposed in the literature. However, two critical issues remain unresolved: (i) it is not clear which model is more appropriate for practical use, and (ii) the model random errors are commonly assumed to follow a normal distribution, which may be unrealistic and can obscure important features of within- and among-subject variation. Because the asymmetry of HIV viral load data is still noticeable even after transformation, it is important to use a more general distribution family that allows the unrealistic normality assumption to be relaxed. We developed skew-elliptical (SE) Bayesian mixed-effects models by taking the model random errors to have an SE distribution. We compared the performance of five SE models that have different time-varying decay rate functions, and for each model we contrasted performance under different random-error assumptions: normal, Student-t, skew-normal, or skew-t. Two AIDS clinical trial datasets were used to illustrate the proposed models and methods. The results indicate that the model with a time-varying viral decay rate composed of two exponential components is preferred. Among the four distributional assumptions, the skew-t and skew-normal models fit the data better than the normal or Student-t models, suggesting that assuming a skewed distribution is important for achieving reasonable results when the data exhibit skewness.
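To see what a skew-normal error term looks like, the sketch below generates skew-normal draws via Azzalini's stochastic representation (Z = delta*|U0| + sqrt(1 - delta^2)*U1 for independent standard normals U0, U1) and compares the sample skewness with that of symmetric normal noise. This illustrates the distributional family only; it is not the paper's Bayesian mixed-effects fitting, and the shape parameter is an arbitrary choice.

```python
import math
import random
import statistics

random.seed(6)

def skew_normal(alpha):
    # Azzalini's representation of a skew-normal(0, 1, alpha) draw
    delta = alpha / math.sqrt(1 + alpha * alpha)
    u0, u1 = random.gauss(0, 1), random.gauss(0, 1)
    return delta * abs(u0) + math.sqrt(1 - delta * delta) * u1

def sample_skewness(xs):
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

sym = [random.gauss(0, 1) for _ in range(20000)]   # symmetric errors
skw = [skew_normal(4.0) for _ in range(20000)]     # right-skewed errors
print(sample_skewness(sym), sample_skewness(skw))
```

When residual-like data show skewness of this kind, a normal error model misstates the tails, which is why the skew-normal and skew-t fits were preferred in the paper's comparison.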
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号