991.
A non-linear model for examining genotypic responses across an array of environments is contrasted with the 'joint regression' formulation, and a rigorous approach to hypothesis testing using the conditional error principle is demonstrated. The model is extended to cater for situations where single straight-line response patterns fail to characterize genotypic behaviors over an environmental array: a combination of two straight lines, with slope β1 in below-average and slope β2 in above-average environments, is offered as the simplest representation of convex and concave patterns. A protocol for classifying genotypes according to the results of hypothesis tests, i.e. H(β1 = β2) and H(β1 = β2 = 1), is presented. A doubly desirable response pattern is convex (β1 < 1 < β2), while a doubly undesirable pattern is concave (β1 > 1 > β2).
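The two-straight-line model described above can be sketched as an ordinary least-squares fit with a breakpoint at the average environment. The yields, environment indices and slope names b1/b2 below are illustrative, not from the paper:

```python
import numpy as np

# Illustrative genotype mean yields y across seven environments, indexed by
# the environment mean e (b1, b2 stand in for the paper's two slopes).
e = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.5, 3.2, 3.8, 4.5, 5.8, 7.1, 8.4])   # convex-looking response

# Two straight lines joined at the average environment e_bar:
#   y = a + b1*(e - e_bar) below average,  y = a + b2*(e - e_bar) above it.
e_bar = e.mean()
below = np.minimum(e - e_bar, 0.0)   # nonzero only in below-average environments
above = np.maximum(e - e_bar, 0.0)   # nonzero only in above-average environments
X = np.column_stack([np.ones_like(e), below, above])
a, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Doubly desirable (convex) pattern: b1 < 1 < b2
print(b1 < 1 < b2)
```

Testing H(β1 = β2) against this fit then reduces to comparing the residual sums of squares of the one-line and two-line models.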
992.
Sample selection in radiocarbon dating
Archaeologists working on the island of O'ahu, Hawai'i, use radiocarbon dating of samples of organic matter found trapped in fish-pond sediments to help them to learn about the chronology of the construction and use of the aquacultural systems created by the Polynesians. At one particular site, Loko Kuwili, 25 organic samples were obtained and funds were available to date an initial nine. However, on calibration to the calendar scale, the radiocarbon determinations provided date estimates that had very large variances. As a result, major issues of chronology remained unresolved and the archaeologists were faced with the prospect of another expensive programme of radiocarbon dating. This paper presents results of research that tackles the problems associated with selecting samples from those which are still available. Building on considerable recent research that utilizes Markov chain Monte Carlo methods to aid archaeologists in their radiocarbon calibration and interpretation, we adopt the standard Bayesian framework of risk functions, which allows us to assess the optimal samples to be sent for dating. Although rather computer intensive, our algorithms are simple to implement within the Bayesian radiocarbon framework that is already in place and produce results that are capable of direct interpretation by the archaeologists. By dating just three more samples from Loko Kuwili the expected variance on the date of greatest interest could be substantially reduced.
993.
A common problem in environmental epidemiology is the estimation and mapping of spatial variation in disease risk. In this paper we analyse data from the Walsall District Health Authority, UK, concerning the spatial distributions of cancer cases compared with controls sampled from the population register. We formulate the risk estimation problem as a nonparametric binary regression problem and consider two different methods of estimation. The first uses a standard kernel method with a cross-validation criterion for choosing the associated bandwidth parameter. The second uses the framework of the generalized additive model (GAM) which has the advantage that it can allow for additional explanatory variables, but is computationally more demanding. For the Walsall data, we obtain similar results using either the kernel method with controls stratified by age and sex to match the age–sex distribution of the cases or the GAM method with random controls but incorporating age and sex as additional explanatory variables. For cancers of the lung or stomach, the analysis shows highly statistically significant spatial variation in risk. For the less common cancers of the pancreas, the spatial variation in risk is not statistically significant.
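The kernel method amounts to a Nadaraya–Watson-type binary regression of case/control labels on location. A minimal one-dimensional sketch with simulated cases and controls (all data and the bandwidth are illustrative; the paper works in two spatial dimensions with a cross-validated bandwidth):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "spatial" coordinates: cases cluster near x = 0,
# controls are spread uniformly (a stand-in for the case-control setup).
cases = rng.normal(0.0, 0.5, size=200)
controls = rng.uniform(-3.0, 3.0, size=200)

x = np.concatenate([cases, controls])
y = np.concatenate([np.ones_like(cases), np.zeros_like(controls)])  # 1 = case

def risk(t, h):
    """Nadaraya-Watson kernel estimate of P(case | location t)."""
    w = np.exp(-0.5 * ((t - x) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

# Risk should be elevated near the case cluster and low far away.
print(risk(0.0, h=0.4), risk(2.5, h=0.4))
```

Cross-validation would choose h by minimizing a prediction criterion over a grid of candidate bandwidths rather than fixing it as here.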
994.
Data collected before the routine application of prenatal screening are of unique value in estimating the natural live-birth prevalence of Down syndrome. However, much of these data are from births over 20 years ago and they are of uncertain quality. In particular, they are subject to varying degrees of underascertainment. Published approaches have used ad hoc corrections to deal with this problem or have been restricted to data sets in which ascertainment is assumed to be complete. In this paper we adopt a Bayesian approach to modelling ascertainment and live-birth prevalence. We consider three prior specifications concerning ascertainment and compare predicted maternal-age-specific prevalence under these three different prior specifications. The computations are carried out by using Markov chain Monte Carlo methods in which model parameters and missing data are sampled.
995.
The World Health Organization (WHO) diagnostic criteria for diabetes mellitus were determined in part by evidence that in some populations the plasma glucose level 2 h after an oral glucose load is a mixture of two distinct distributions. We present a finite mixture model that allows the two component densities to be generalized linear models and the mixture probability to be a logistic regression model. The model allows us to estimate the prevalence of diabetes and sensitivity and specificity of the diagnostic criteria as a function of covariates and to estimate them in the absence of an external standard. Sensitivity is the probability that a test indicates disease conditionally on disease being present. Specificity is the probability that a test indicates no disease conditionally on no disease being present. We obtained maximum likelihood estimates via the EM algorithm and derived the standard errors from the information matrix and by the bootstrap. In the application to data from the diabetes in Egypt project a two-component mixture model fits well and the two components are interpreted as normal and diabetes. The means and variances are similar to results found in other populations. The minimum misclassification cutpoints decrease with age, are lower in urban areas and are higher in rural areas than the 200 mg/dl cutpoint recommended by the WHO. These differences are modest and our results generally support the WHO criterion. Our methods allow the direct inclusion of concomitant data whereas past analyses were based on partitioning the data.
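The EM fit of a two-component mixture can be sketched with constant-mean Gaussian components; the paper's components are generalized linear models with covariates, and the glucose-like data below are simulated and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-h plasma glucose values (mg/dl): a "normal" component and a
# "diabetes" component mixed with prevalence 0.2 (all values illustrative).
normal = rng.normal(100.0, 15.0, size=800)
diabetes = rng.normal(250.0, 40.0, size=200)
g = np.concatenate([normal, diabetes])

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# EM for a two-component Gaussian mixture: pi is the mixture probability
# (prevalence), components 1 and 2 play the roles of normal and diabetes.
pi, mu1, sd1, mu2, sd2 = 0.5, g.min(), g.std(), g.max(), g.std()
for _ in range(200):
    # E-step: posterior probability that each observation is "diabetes"
    d1 = (1 - pi) * norm_pdf(g, mu1, sd1)
    d2 = pi * norm_pdf(g, mu2, sd2)
    r = d2 / (d1 + d2)
    # M-step: update the prevalence and the component parameters
    pi = r.mean()
    mu1, mu2 = np.average(g, weights=1 - r), np.average(g, weights=r)
    sd1 = np.sqrt(np.average((g - mu1) ** 2, weights=1 - r))
    sd2 = np.sqrt(np.average((g - mu2) ** 2, weights=r))

print(round(pi, 2), round(mu1), round(mu2))  # estimated prevalence and means
```

A misclassification cutpoint can then be read off as the glucose value where the two weighted component densities cross.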
996.
Let X1, X2, ..., be real-valued random variables forming a strictly stationary sequence, and satisfying the basic requirement of being either pairwise positively quadrant dependent or pairwise negatively quadrant dependent. Let F be the marginal distribution function of the Xi's, which is estimated by the empirical distribution function Fn and also by a smooth kernel-type estimate F̂n, by means of the segment X1, ..., Xn. These estimates are compared on the basis of their mean squared errors (MSE). The main results of this paper are the following. Under certain regularity conditions, the optimal bandwidth (in the MSE sense) is determined, and is found to be the same as that in the independent identically distributed case. It is also shown that n MSE(Fn(t)) and n MSE(F̂n(t)) tend to the same constant, as n → ∞, so that one cannot discriminate between the two estimates on the basis of the MSE. Next, if i(n) = min{k ∈ {1, 2, ...}: MSE(Fk(t)) ≤ MSE(F̂n(t))}, then it is proved that i(n)/n tends to 1, as n → ∞. Thus, once again, one cannot choose one estimate over the other in terms of their asymptotic relative efficiency. If, however, the squared bias of F̂n(t) tends to 0 sufficiently fast, or equivalently, the bandwidth hn satisfies the requirement that n hn³ → 0, as n → ∞, it is shown that, for a suitable choice of the kernel, (i(n) − n)/(n hn) tends to a positive number, as n → ∞. It follows that the deficiency of Fn(t) with respect to F̂n(t), i(n) − n, is substantial, and, actually, tends to ∞, as n → ∞. In terms of deficiency, the smooth estimate F̂n(t) is preferable to the empirical distribution function Fn(t).
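The MSE comparison between the empirical distribution function Fn and a kernel-smoothed estimate can be illustrated by Monte Carlo in the i.i.d. special case; the abstract's setting is stationary quadrant-dependent sequences, and the normal data, evaluation point and bandwidth below are illustrative:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def Phi(z):
    """Standard normal distribution function (the integrated kernel)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

t, true_F = 0.0, 0.5        # true F(t) for N(0, 1) data at t = 0
n, reps = 50, 2000
h = n ** (-1 / 3)           # bandwidth at the MSE-optimal rate h ~ n^(-1/3)

mse_emp = mse_ker = 0.0
for _ in range(reps):
    x = rng.standard_normal(n)
    Fn = np.mean(x <= t)                               # empirical d.f.
    Fhat = np.mean([Phi((t - xi) / h) for xi in x])    # kernel-smoothed d.f.
    mse_emp += (Fn - true_F) ** 2
    mse_ker += (Fhat - true_F) ** 2
mse_emp /= reps
mse_ker /= reps

# The smoothed estimate trades a small bias for a variance reduction.
print(mse_emp, mse_ker)
```

Both MSEs shrink at rate 1/n, which is why the paper turns to deficiency, the extra observations i(n) − n that Fn needs to match F̂n, as a finer comparison.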
997.
998.
A two-sample version of the non-parametric index of tracking for longitudinal data introduced by Foulkes and Davis is described. The index is based on a multivariate U-statistic, and provides a measure of the stochastic ordering of the underlying growth curves of the samples. The utility of the U-statistic approach is explored with two applications related to growth curves and repeated measures analyses.
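The pairwise-comparison idea underlying such an index can be sketched with the simplest two-sample U-statistic, an estimate of P(A < B). One summary value per subject is used here, whereas the actual index compares whole longitudinal profiles; the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative end-of-study measurements for two samples whose underlying
# growth curves are stochastically ordered (sample b tends to run higher).
a = rng.normal(150.0, 5.0, size=40)
b = rng.normal(155.0, 5.0, size=40)

# Two-sample U-statistic estimating P(A < B): 0.5 indicates no ordering,
# values near 1 indicate that sample b stochastically dominates sample a.
u = np.mean(a[:, None] < b[None, :])
print(round(u, 2))
```

Averaging such indicator comparisons over all between-sample pairs, and over the repeated measurement occasions, gives the multivariate version.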
999.
In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit's performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator's deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.
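The experiment-predict-shrink cycle can be sketched with a toy deterministic "simulator" and a quadratic response-surface predictor; the function, factors and ranges below are invented for illustration, and the paper uses more flexible non-linear models than a fixed polynomial:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a deterministic circuit simulator: one response (say, delay)
# as an unknown function of a designable factor x1 and a noise factor x2.
def simulate(x1, x2):
    return (x1 - 0.3) ** 2 + 0.5 * x1 * x2 + 0.1 * x2

# Run a small space-filling experiment over the current factor ranges.
X = rng.uniform(0.0, 1.0, size=(60, 2))
y = simulate(X[:, 0], X[:, 1])

# Fit a quadratic response-surface predictor (interaction included).
x1, x2 = X[:, 0], X[:, 1]
D = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef = np.linalg.lstsq(D, y, rcond=None)[0]

# Use the predictor to narrow the designable factor: pick the region where
# the predicted worst case over the noise factor is smallest.
grid = np.linspace(0.0, 1.0, 101)
worst = [max(coef @ np.array([1, g, n, g * n, g**2, n**2]) for n in (0.0, 1.0))
         for g in grid]
best_x1 = grid[int(np.argmin(worst))]
print(best_x1)
```

In a sequential study the ranges would be shrunk around such a region and a new, denser experiment run there.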
1000.
This paper considers two types of chaotic map time series models, including the well-known tent, logistic and binary-shift maps as special cases; these are called curved tent and curved binary families. Deterministic behaviour is investigated by invariant distributions, Lyapunov exponents, and by serial dependency. Stochastic time reversal of the families is shown to produce models which have a broader range of stochastic and chaotic properties than their deterministic counterparts. The marginal distributions may have concentrations and restricted supports and are shown to be a non-standard class of invariant distribution. Dependency is generally weaker with the reversed stochastic models. The work gives a broad statistical account of deterministic and stochastically reversed map models, such as are emerging in random number generation, communication systems and cryptography.
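For the special-case maps named above, the deterministic diagnostics are easy to reproduce numerically. A sketch for the logistic map x(t+1) = 4 x(t)(1 − x(t)), whose invariant distribution has mean 1/2 and variance 1/8 and whose Lyapunov exponent is ln 2; the tent map is avoided here only because its orbits collapse to 0 in floating-point arithmetic:

```python
import numpy as np

# Logistic map x -> 4*x*(1 - x): invariant density 1/(pi*sqrt(x*(1 - x))),
# so the long-run orbit should show mean 1/2 and variance 1/8.
x = 0.1234567                      # arbitrary seed in (0, 1)
orbit = np.empty(100_000)
for t in range(orbit.size):
    orbit[t] = x
    x = 4.0 * x * (1.0 - x)

# Lyapunov exponent: orbit average of log|f'(x)| with f'(x) = 4 - 8x,
# which should approach ln 2 for this map.
lyap = np.mean(np.log(np.abs(4.0 - 8.0 * orbit)))

print(orbit.mean(), orbit.var(), lyap)
```

The same diagnostics (invariant moments, Lyapunov exponent, serial dependency) apply to the curved families, with the invariant density and exponent depending on the curvature parameter.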