21.
Summary.  Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet-style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet-style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard-thresholded coefficients.
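The transform–threshold–invert recipe described above can be sketched with a real-valued Haar transform and the classical universal hard threshold. This is a minimal numpy-only illustration of the standard baseline, not the paper's complex-valued or Bayesian methods; the step signal and noise level are invented for the demo.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar decomposition of a length-2^J signal.
    Returns the coarsest scaling coefficient and the detail coefficients."""
    details = []
    approx = np.asarray(x, dtype=float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))
        approx = (even + odd) / np.sqrt(2)
    return approx, details

def haar_idwt(approx, details):
    """Invert haar_dwt."""
    x = approx
    for d in reversed(details):
        even = (x + d) / np.sqrt(2)
        odd = (x - d) / np.sqrt(2)
        out = np.empty(2 * len(x))
        out[0::2], out[1::2] = even, odd
        x = out
    return x

def hard_threshold_denoise(y, sigma):
    """Classical wavelet shrinkage: DWT -> hard threshold -> inverse DWT."""
    lam = sigma * np.sqrt(2 * np.log(len(y)))      # universal threshold
    approx, details = haar_dwt(y)
    details = [d * (np.abs(d) > lam) for d in details]  # keep large coefficients only
    return haar_idwt(approx, details)

rng = np.random.default_rng(0)
n = 256
t = np.arange(n) / n
signal = np.where(t < 0.5, 0.0, 4.0)   # a step: the kind of discontinuity shrinkage handles well
noisy = signal + rng.normal(0.0, 1.0, n)
est = hard_threshold_denoise(noisy, sigma=1.0)
print(np.mean((est - signal) ** 2), np.mean((noisy - signal) ** 2))
```

The hard threshold keeps a coefficient untouched if it exceeds λ and zeroes it otherwise; soft thresholding (shrinking survivors toward zero by λ) is the other standard choice.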
22.
The lack of a variance estimator is a serious practical weakness of a sampling design. This paper provides unbiased variance estimators for several sampling designs based on inverse sampling, both with and without an adaptive component. It proposes a new design, called the general inverse sampling design, that avoids sampling an infeasibly large number of units. The paper provides estimators for this design as well as for its adaptive modification. A simple artificial example is used to demonstrate the computations. The adaptive and non-adaptive designs are compared using simulations based on real data sets. The results indicate that, for appropriate populations, the adaptive version can achieve a substantial variance reduction compared with the non-adaptive version. Also, adaptive general inverse sampling with a limitation on the initial sample size achieves a greater variance reduction than without the limitation.
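In the simplest form of inverse sampling, units are drawn until a fixed number k of rare-class units has been observed. The sketch below simulates this and contrasts the naive proportion estimator k/N with Haldane's unbiased estimator (k-1)/(N-1). It illustrates only the basic inverse-sampling idea, not the paper's general inverse sampling design; all parameter values are invented.

```python
import numpy as np

def inverse_binomial_sample(p, k, rng):
    """Draw Bernoulli(p) trials until k successes; return the total number of trials N."""
    trials = successes = 0
    while successes < k:
        trials += 1
        successes += rng.random() < p
    return trials

rng = np.random.default_rng(1)
p, k = 0.1, 5
ns = np.array([inverse_binomial_sample(p, k, rng) for _ in range(20000)])

naive = k / ns                 # biased upward under inverse sampling
unbiased = (k - 1) / (ns - 1)  # Haldane's unbiased estimator of p
print(naive.mean(), unbiased.mean())
```

The simulation makes the point of the abstract concrete: under a stopping rule that depends on the data, the "obvious" estimator is biased, and design-specific unbiased estimators are needed.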
23.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations where the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.
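The combined-model idea can be sketched as a toy simulation: tumour onset occurs at a random time, size then grows deterministically, and the death hazard is the baseline hazard plus a term proportional to current size. Everything below (growth law, hazard form, parameter values) is an invented illustration of the general structure, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_animal(dt=0.01, tmax=2.0, onset_rate=1.0,
                    growth=2.0, base_haz=0.3, beta=1.5):
    """Discrete-time sketch: onset ~ Exp(onset_rate); after onset the tumour
    grows exponentially; death hazard = base_haz + beta * current size.
    Returns (time of death or censoring, tumour size observed at that time)."""
    onset = rng.exponential(1.0 / onset_rate)
    t = 0.0
    while t < tmax:
        size = np.exp(growth * (t - onset)) - 1 if t > onset else 0.0
        if rng.random() < (base_haz + beta * size) * dt:
            return t, size                      # death: endpoint observation
        t += dt
    size = np.exp(growth * (tmax - onset)) - 1 if tmax > onset else 0.0
    return tmax, size                           # sacrifice at the terminal time

outcomes = [simulate_animal() for _ in range(2000)]
times = np.array([t for t, _ in outcomes])
sizes = np.array([s for _, s in outcomes])
print((sizes > 0).mean())   # fraction of animals with an observable tumour at endpoint
```

In the paper's setting, the likelihood built from such (time, size) pairs is what allows the incidence rate to be estimated without lethality or multiple-sacrifice assumptions.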
24.
We propose a new modified (biased) cross-validation method for adaptively determining the bandwidth in a nonparametric density estimation setting. The method is shown to provide consistent minimizers. Simulation results are reported that compare the small-sample behaviour of the new and the classical cross-validation selectors.
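For reference, the classical selector the paper compares against can be written down in a few lines: least-squares cross-validation minimizes an unbiased estimate of the integrated squared error of a Gaussian kernel density estimate. This sketch implements the classical criterion, not the modified biased one, and the simulated data are invented.

```python
import numpy as np

def lscv(h, data):
    """Least-squares cross-validation score for a Gaussian KDE (to be minimized)."""
    n = len(data)
    u = (data[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)       # Gaussian kernel
    # integral of fhat^2 has a closed form via the kernel convolved with itself,
    # which is Gaussian with bandwidth h * sqrt(2)
    K2 = np.exp(-0.25 * u**2) / np.sqrt(4 * np.pi)
    int_f2 = K2.sum() / (n**2 * h)
    # leave-one-out average of fhat_{-i}(x_i): drop the diagonal (constant) terms
    loo = (K.sum() - n * K[0, 0]) / (n * (n - 1) * h)
    return int_f2 - 2 * loo

rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, 300)
hs = np.linspace(0.05, 1.5, 60)
h_star = hs[np.argmin([lscv(h, data) for h in hs])]
print(h_star)
```

Biased cross-validation replaces the exact integrated-squared-error estimate with a smoother, lower-variance surrogate, which is what the modification in the paper targets.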
25.
An Economic Rethinking of the Theory of Sustainable Development
Because the theory of sustainable development has certain inherent defects, it has been implemented with limited success in practice. After rethinking basic issues such as the concept of sustainable development, the relationship between sustainable development and externalities, and ecological costs, this paper argues that two distinct classes of problems must be distinguished in implementation. Intragenerational, current-period environmental problems can be addressed through Coasean and Pigouvian instruments, whereas intergenerationally accumulated ecological problems must be resolved by strengthening institutional arrangements and policy guidance.
26.
A mathematical model of multiple linear regression is established; least-squares estimation yields the normal equations, and tests of the correlation are carried out, so that related practical problems can be solved.
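The least-squares step described above reduces to solving the normal equations X'Xβ = X'y. A minimal sketch with simulated data (all values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
# design matrix: intercept column plus two regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(0.0, 0.1, n)

# normal equations: (X'X) beta = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# coefficient of determination R^2 as a simple check of fit
resid = y - X @ beta_hat
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(beta_hat, r2)
```

In practice one would solve via a QR or SVD factorization (e.g. `np.linalg.lstsq`) rather than forming X'X explicitly, since the normal equations square the condition number; the explicit form is shown here because it is what the abstract describes.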
27.
Because of the inherent complexity of biological systems, there is often a choice between a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, but may give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments, and by their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination (the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model) are reviewed and compared with particular reference to their application in the area of risk assessment. The ability of each of the models to describe and incorporate such physiological processes as protein binding, precursor-metabolite relations and hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver, and, unlike the other two models, has so far not been experimentally refuted. These simpler models may, however, provide useful bounds on parameter estimates and on extrapolations and risk assessments.
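The first two models have standard textbook closed forms that make the comparison concrete: the venous equilibration (well-stirred) model gives CL = Q·fu·CLint/(Q + fu·CLint), while the parallel tube model gives CL = Q·(1 - exp(-fu·CLint/Q)). The sketch below uses these standard formulas; the flow and binding values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def venous_equilibration(Q, fu, clint):
    """Well-stirred model: the liver treated as one well-mixed compartment."""
    return Q * fu * clint / (Q + fu * clint)

def parallel_tube(Q, fu, clint):
    """Parallel tube model: drug concentration declines along the sinusoids."""
    return Q * (1 - np.exp(-fu * clint / Q))

Q = 1.5                            # hepatic blood flow, L/min (typical human value)
fu = 0.1                           # unbound fraction in blood
clints = np.logspace(-1, 3, 50)    # intrinsic clearance, L/min

cl_ws = venous_equilibration(Q, fu, clints)
cl_pt = parallel_tube(Q, fu, clints)
# the models agree for low-extraction drugs and diverge at high extraction,
# which is exactly where extrapolation and risk assessment become model-dependent
print(cl_ws[0], cl_pt[0], cl_ws[-1], cl_pt[-1])
```

Both clearances are bounded above by hepatic blood flow Q, and the parallel tube model always predicts clearance at least as high as the well-stirred model; the distributed sinusoidal perfusion model lies between these bounds, which is the sense in which the simpler models bracket it.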
28.
The small-sample performance of least median of squares (LMS), reweighted least squares (RLS), least squares, least absolute deviations, and three partially adaptive estimators is compared using Monte Carlo simulations. Two data problems are addressed in the paper: (1) data generated from non-normal error distributions and (2) contaminated data. Breakdown plots are used to investigate the sensitivity of the partially adaptive estimators to data contamination relative to RLS. One partially adaptive estimator performs especially well when the errors are skewed, while another partially adaptive estimator and RLS perform particularly well when the errors are extremely leptokurtic. In comparison with RLS, partially adaptive estimators are only moderately effective in resisting data contamination; however, they outperform the least squares and least absolute deviations estimators.
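The contamination problem can be seen in miniature with a location parameter, where least squares reduces to the sample mean and least absolute deviations to the sample median. The data and contamination fraction below are invented; this illustrates the general robustness issue, not the partially adaptive estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
clean = rng.normal(0.0, 1.0, n)
contaminated = clean.copy()
contaminated[:20] += 50.0   # 10% gross outliers

# least squares (mean) is dragged toward the outliers;
# least absolute deviations (median) barely moves
print(contaminated.mean(), np.median(contaminated))
```

High-breakdown estimators such as LMS push this idea further, tolerating up to half the sample being contaminated, which is what the breakdown plots in the paper quantify.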
29.
The L1- and L2-errors of the histogram estimate of a density f from a sample X1, X2, …, Xn using a cubic partition are shown to be asymptotically normal without any unnecessary conditions imposed on the density f. The asymptotic variances are shown to depend on f only through the corresponding norm of f. From this follows the asymptotic null distribution of a goodness-of-fit test based on the total variation distance, introduced by Györfi and van der Meulen (1991). This note uses the idea of partial inversion for obtaining characteristic functions of conditional distributions, which goes back at least to Bartlett (1938).
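The L1-error in question can be approximated numerically: build a histogram on a regular partition, then integrate |f̂ - f| on a fine grid. A small simulation sketch; the sample size, bin width, and replication count are arbitrary choices for illustration, and the true density is taken to be standard normal.

```python
import numpy as np

def histogram_l1_error(n, h, rng, grid_lim=6.0):
    """L1 distance between a histogram estimate and the true N(0,1) density,
    via a Riemann sum over a fine grid on a regular (cubic) partition."""
    x = rng.normal(0.0, 1.0, n)
    edges = np.arange(-grid_lim, grid_lim + h, h)
    counts, _ = np.histogram(x, bins=edges)
    fhat = counts / (n * h)                       # histogram density estimate
    t = np.linspace(-grid_lim, grid_lim, 4000)
    f = np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)  # true density
    idx = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, len(fhat) - 1)
    return np.sum(np.abs(fhat[idx] - f)) * (t[1] - t[0])

rng = np.random.default_rng(6)
errs = np.array([histogram_l1_error(500, 0.4, rng) for _ in range(200)])
print(errs.mean(), errs.std())
```

Repeating this over many samples and standardizing the errors is the empirical counterpart of the asymptotic normality result the note proves.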
30.
Gini’s nuclear family
The purpose of this paper is to justify the use of the Gini coefficient and two close relatives for summarizing the basic information of inequality in distributions of income. To this end we employ a specific transformation of the Lorenz curve, the scaled conditional mean curve, rather than the Lorenz curve as the basic formal representation of inequality in distributions of income. The scaled conditional mean curve is shown to possess several attractive properties as an alternative interpretation of the information content of the Lorenz curve and furthermore proves to yield essential information on polarization in the population. The paper also provides asymptotic distribution results for the empirical scaled conditional mean curve and the related family of empirical measures of inequality.
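For context, the Lorenz curve and the Gini coefficient that the paper builds on are straightforward to compute: the Gini coefficient is twice the area between the Lorenz curve and the diagonal of perfect equality. This sketch shows the standard construction, not the paper's scaled conditional mean curve; the toy income vectors are invented.

```python
import numpy as np

def lorenz_curve(income):
    """Return (p, L(p)): cumulative population share vs cumulative income share."""
    x = np.sort(np.asarray(income, dtype=float))
    cum = np.cumsum(x)
    p = np.arange(1, len(x) + 1) / len(x)
    return np.concatenate([[0.0], p]), np.concatenate([[0.0], cum / cum[-1]])

def gini(income):
    """Gini coefficient: twice the area between the Lorenz curve and the diagonal."""
    p, L = lorenz_curve(income)
    area_under = np.sum((L[1:] + L[:-1]) / 2 * np.diff(p))  # trapezoid rule
    return 1 - 2 * area_under

print(gini([1, 1, 1, 1]))      # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))    # near-total concentration -> 0.75 (finite-sample maximum (n-1)/n)
```

Any curve derived from the Lorenz curve, such as the scaled conditional mean curve, can be computed from the same sorted cumulative shares, which is why the empirical versions share the asymptotic machinery developed in the paper.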