  Paid full text   6,521 articles
  Free   180 articles
  Free (domestic)   14 articles
Management   279 articles
Labour science   1 article
Ethnology   1 article
Demography   46 articles
Book series and collected works   27 articles
Theory and methodology   19 articles
General (interdisciplinary)   454 articles
Sociology   32 articles
Statistics   5,856 articles
  2023   37 articles
  2022   51 articles
  2021   45 articles
  2020   116 articles
  2019   241 articles
  2018   276 articles
  2017   438 articles
  2016   203 articles
  2015   164 articles
  2014   195 articles
  2013   2,016 articles
  2012   600 articles
  2011   172 articles
  2010   182 articles
  2009   197 articles
  2008   185 articles
  2007   140 articles
  2006   139 articles
  2005   130 articles
  2004   152 articles
  2003   116 articles
  2002   105 articles
  2001   109 articles
  2000   97 articles
  1999   95 articles
  1998   96 articles
  1997   70 articles
  1996   40 articles
  1995   31 articles
  1994   39 articles
  1993   35 articles
  1992   34 articles
  1991   14 articles
  1990   24 articles
  1989   16 articles
  1988   18 articles
  1987   10 articles
  1986   9 articles
  1985   6 articles
  1984   16 articles
  1983   16 articles
  1982   9 articles
  1981   6 articles
  1980   4 articles
  1979   6 articles
  1978   5 articles
  1977   2 articles
  1976   2 articles
  1975   4 articles
  1973   1 article
A total of 6,715 results were returned (search time: 328 ms).
31.
Longitudinal data often contain missing observations, and it is generally difficult to justify a particular missing-data mechanism, whether random or not, since competing mechanisms may be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and association parameters for longitudinal binary data with drop-outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
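As a rough, generic illustration of the kind of specification described in this abstract (not the authors' exact model), the marginal mean and the pairwise association can each be linked to covariates through their own regression:

\[
\operatorname{logit}\Pr(Y_{ij}=1 \mid \mathbf{x}_{ij}) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta},
\qquad
\log \psi_{ijk} = \mathbf{z}_{ijk}^{\top}\boldsymbol{\alpha},
\]

where \(Y_{ij}\) is the binary response of subject \(i\) at occasion \(j\), \(\psi_{ijk}\) is the odds ratio between \(Y_{ij}\) and \(Y_{ik}\), and all symbols here are placeholders. The likelihood is then built from the joint probabilities implied by these two regressions, with drop-out entering through the observed-data likelihood.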
32.
Detection and correction of artificial shifts in climate series   (total citations: 6; self-citations: 0; citations by others: 6)
Summary.  Many long instrumental climate records are available and might provide useful information in climate research. These series are usually affected by artificial shifts, due to changes in the conditions of measurement and various kinds of spurious data. A comparison with surrounding weather stations by means of a suitable two-factor model allows us to check the reliability of the series. An adapted penalized log-likelihood procedure is used to detect an unknown number of breaks and outliers. An example concerning temperature series from France confirms that a systematic comparison of the series together is valuable and allows us to correct the data even when no reliable series can be taken as a reference.
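The core of penalized break detection can be sketched with a simple dynamic-programming segmentation of a single series under a least-squares cost with a per-break penalty; this is a generic illustration, not the two-factor model or the adapted penalty of the paper, and the series and penalty value below are made up.

```python
import numpy as np

def detect_breaks(y, pen):
    """Penalized least-squares segmentation by dynamic programming.

    Minimises total within-segment squared error plus `pen` per break;
    a generic sketch of penalized break detection only.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))   # prefix sums of squares

    def seg_cost(i, j):
        # squared error of segment y[i:j] around its own mean
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    best = np.full(n + 1, np.inf)   # best[j]: optimal cost of y[:j]
    best[0] = -pen                  # cancels the penalty of the first segment
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + seg_cost(i, j) + pen
            if c < best[j]:
                best[j], last[j] = c, i
    breaks, j = [], n               # backtrack the break positions
    while last[j] > 0:
        breaks.append(last[j])
        j = last[j]
    return sorted(breaks)

# toy series with two artificial shifts, at positions 40 and 70
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 40), rng.normal(3, 1, 30), rng.normal(0.5, 1, 30)])
print(detect_breaks(y, pen=15.0))
```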
33.
Oller, Gómez & Calle (2004) give a constant-sum condition for processes that generate interval-censored lifetime data. They show that in models satisfying this condition, it is possible to estimate the lifetime distribution non-parametrically based on a well-known simplified likelihood. The author shows that this constant-sum condition is equivalent to the existence of an observation process that is independent of lifetimes and which gives the same probability distribution for the observed data as the underlying true process.
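For context, the "well-known simplified likelihood" for interval-censored lifetimes treats the observed interval endpoints \((L_i, R_i]\) as if they were fixed and depends only on the lifetime distribution function \(F\) (a standard sketch, not a quotation from the paper):

\[
L_s(F) \;=\; \prod_{i=1}^{n}\bigl\{F(R_i) - F(L_i)\bigr\}.
\]

The constant-sum condition is what justifies maximising \(L_s\) alone, even though the full likelihood also involves the distribution of the observation process.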
34.
Summary.  Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.
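The basic "transform, threshold, invert" recipe described above can be sketched with real-valued wavelets using PyWavelets (the complex-valued and Bayesian variants proposed in the paper are not part of that library); the wavelet choice, decomposition level and universal hard threshold below are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(y, wavelet="db4", level=4):
    """Hard-threshold wavelet shrinkage with the universal threshold.

    A real-valued sketch of the generic transform/shrink/invert idea,
    not the complex-valued methods of the paper.
    """
    coeffs = pywt.wavedec(y, wavelet, level=level)         # forward DWT
    # estimate the noise scale from the finest detail coefficients (MAD)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(y)))         # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard")
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(y)]          # inverse DWT

# toy piecewise-constant signal corrupted by Gaussian noise
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 512)
signal = np.where(x < 0.4, 0.0, 2.0) + np.where(x > 0.7, -1.5, 0.0)
estimate = wavelet_denoise(signal + rng.normal(0, 0.3, x.size))
```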
35.
Summary.  The pattern of absenteeism during the downsizing process of companies is a topic of current interest in economics and social science. A general question is whether employees who are frequently absent are more likely to be selected to be laid off or, in contrast, whether employees to be dismissed are more likely to be absent for the remaining time of their working contract. We pursue an empirical and microeconomic investigation of these theses. We analyse longitudinal data that were collected in a German company over several years. We fit a semiparametric transition model based on a mixture Poisson distribution for the days of absenteeism per month. Prediction intervals are considered and the primary focus is on the period of downsizing. The data reveal clear evidence for the hypothesis that employees who are to be laid off are more frequently absent before leaving the company. Interestingly, though, no clear evidence is seen that employees being selected to leave the company are those with a bad absenteeism profile.
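A two-component Poisson mixture, the building block of the transition model mentioned above, can be fitted with a short EM loop; this is a generic sketch on made-up monthly absence counts, not the paper's semiparametric transition model.

```python
import numpy as np
from scipy.stats import poisson

def fit_poisson_mixture(y, n_iter=200):
    """EM for a two-component Poisson mixture (illustrative sketch only)."""
    y = np.asarray(y, dtype=float)
    lam = np.array([0.5 * y.mean() + 1e-3, 1.5 * y.mean() + 1e-3])  # start values
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = w * poisson.pmf(y[:, None], lam)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component means
        w = resp.mean(axis=0)
        lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)
    return w, lam

# made-up days of absenteeism per month: most months low, a few high months
counts = np.array([0, 1, 0, 2, 1, 0, 8, 1, 0, 10, 1, 2, 0, 9, 1])
weights, means = fit_poisson_mixture(counts)
```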
36.
The lack of a variance estimator is a serious practical weakness of a sampling design. This paper provides unbiased variance estimators for several sampling designs based on inverse sampling, both with and without an adaptive component. It proposes a new design, called the general inverse sampling design, that avoids sampling an infeasibly large number of units. The paper provides estimators for this design as well as for its adaptive modification. A simple artificial example is used to demonstrate the computations. The adaptive and non-adaptive designs are compared using simulations based on real data sets. The results indicate that, for appropriate populations, the adaptive version can achieve a substantial variance reduction compared with the non-adaptive version. Moreover, adaptive general inverse sampling with a limitation on the initial sample size achieves a greater variance reduction than without the limitation.
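The flavour of inverse sampling can be shown with its simplest with-replacement version, where draws continue until a fixed number k of "rare" units has been observed and \((k-1)/(n-1)\) is the classical unbiased estimator of the rare-class proportion; this toy simulation is only a sketch of that basic idea, not the general inverse sampling design or its adaptive modification proposed in the paper.

```python
import numpy as np

def inverse_sample(pop_is_rare, k, rng):
    """Draw with replacement until k rare units are seen; return sample size n."""
    n = hits = 0
    while hits < k:
        n += 1
        hits += int(pop_is_rare[rng.integers(len(pop_is_rare))])
    return n

rng = np.random.default_rng(2)
population = np.zeros(10_000, dtype=int)
population[:300] = 1                       # a rare class making up 3% of units
k = 10
estimates = []
for _ in range(1_000):
    n = inverse_sample(population, k, rng)
    estimates.append((k - 1) / (n - 1))    # classical unbiased estimator of p
print(np.mean(estimates), np.var(estimates))
```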
37.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.
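One standard way of letting a stochastic covariate such as tumour size feed into the failure hazard, in the spirit of (though not identical to) the models discussed here, is a proportional-hazards link with generic symbols:

\[
\lambda\bigl(t \mid Z(t)\bigr) = \lambda_0(t)\,\exp\{\beta Z(t)\},
\]

where \(Z(t)\) is the current tumour size generated by its own stochastic growth model and \(\lambda_0\) is a baseline hazard; a combined likelihood then couples the growth process for \(Z\) with the competing risks of death.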
38.
We propose a new modified (biased) cross-validation method for adaptively determining the bandwidth in a nonparametric density estimation setup. It is shown that the method provides consistent minimizers. Simulation results are reported which compare the small-sample behavior of the new and the classical cross-validation selectors.
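For reference, the classical least-squares cross-validation criterion that such modified selectors start from can be written down and minimised over a grid for a Gaussian kernel; the data and grid below are made up, and this is the classical selector, not the authors' modified (biased) version.

```python
import numpy as np
from scipy.stats import norm

def lscv_score(h, x):
    """Classical least-squares cross-validation score for a Gaussian KDE."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # integral of the squared estimate: Gaussian kernels convolve in closed form
    term1 = norm.pdf(d, scale=np.sqrt(2) * h).sum() / n**2
    # leave-one-out term: drop the diagonal (i == j) contributions
    off_diag = norm.pdf(d, scale=h).sum() - n * norm.pdf(0.0, scale=h)
    term2 = 2.0 * off_diag / (n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(3)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.0, 60)
h_cv = grid[np.argmin([lscv_score(h, x) for h in grid])]   # selected bandwidth
```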
39.
Summary. Semiparametric mixed models are useful in biometric and econometric applications, especially for longitudinal data. Maximum penalized likelihood estimators (MPLEs) have been shown to work well by Zhang and co-workers for both linear coefficients and nonparametric functions. This paper considers the role of influence diagnostics in the MPLE by extending the case deletion and subject deletion analysis of linear models to accommodate the inclusion of a nonparametric component. We focus on influence measures for the fixed effects and provide formulae that are analogous to those for simpler models and readily computable with the MPLE algorithm. We also establish an equivalence between the case or subject deletion model and a mean shift outlier model from which we derive tests for outliers. The influence diagnostics proposed are illustrated through a longitudinal hormone study on progesterone and a simulated example.
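The simpler linear-model case-deletion measure that the paper generalises can be computed directly from the hat matrix; the sketch below is the standard Cook's distance for an ordinary least-squares fit on made-up data, illustrating the kind of quantity being extended rather than the MPLE formulae themselves.

```python
import numpy as np

def cooks_distance(X, y):
    """Case-deletion influence (Cook's distance) for an OLS fit."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix
    h = np.diag(H)                              # leverages
    resid = y - H @ y
    s2 = resid @ resid / (n - p)                # residual variance estimate
    return resid**2 / (p * s2) * h / (1 - h) ** 2

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=50)
y[0] += 5.0                                     # an artificial outlying case
print(np.argmax(cooks_distance(X, y)))          # flags case 0 as most influential
```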
40.
A mathematical model of multiple linear regression is established; the normal equations are obtained by least-squares estimation and tests of correlation are carried out, thereby solving related practical problems.
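A minimal sketch of the workflow this abstract describes, solving the normal equations by least squares and checking the fit with R², on made-up data:

```python
import numpy as np

def fit_linear_regression(X, y):
    """Solve the normal equations (X'X) beta = X'y and report R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add an intercept column
    beta = np.linalg.solve(X1.T @ X1, X1.T @ y)       # normal equations
    fitted = X1 @ beta
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot                # coefficients and R^2

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=100)
beta, r2 = fit_linear_regression(X, y)
```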