991.
Sample selection in radiocarbon dating   (Total citations: 1; self-citations: 0; citations by others: 1)
Archaeologists working on the island of O'ahu, Hawai'i, use radiocarbon dating of samples of organic matter found trapped in fish-pond sediments to help them to learn about the chronology of the construction and use of the aquacultural systems created by the Polynesians. At one particular site, Loko Kuwili, 25 organic samples were obtained and funds were available to date an initial nine. However, on calibration to the calendar scale, the radiocarbon determinations provided date estimates that had very large variances. As a result, major issues of chronology remained unresolved and the archaeologists were faced with the prospect of another expensive programme of radiocarbon dating. This paper presents results of research that tackles the problems associated with selecting samples from those which are still available. Building on considerable recent research that utilizes Markov chain Monte Carlo methods to aid archaeologists in their radiocarbon calibration and interpretation, we adopt the standard Bayesian framework of risk functions, which allows us to assess the optimal samples to be sent for dating. Although rather computer intensive, our algorithms are simple to implement within the Bayesian radiocarbon framework that is already in place and produce results that are capable of direct interpretation by the archaeologists. By dating just three more samples from Loko Kuwili the expected variance on the date of greatest interest could be substantially reduced.
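A rough sketch of the risk-function idea behind this kind of sample selection: for each candidate sample, simulate hypothetical future determinations from the current posterior, recompute the posterior variance of the date of interest, and send for dating the candidate whose expected variance is smallest. The setup below is a toy conjugate normal model with invented sample names and laboratory errors, not the calibrated radiocarbon framework or the algorithm of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical current posterior for the calendar date of interest (illustrative numbers).
    prior_mean, prior_sd = -450.0, 60.0
    # Hypothetical candidate samples and their laboratory measurement errors.
    candidate_sd = {"sample_10": 40.0, "sample_11": 80.0, "sample_12": 25.0}

    def expected_posterior_var(meas_sd, n_sim=2000):
        # Preposterior (risk-function) evaluation: average the posterior variance
        # of the date over hypothetical future determinations. With a conjugate
        # normal model the variance does not depend on the simulated value, but
        # the Monte Carlo loop mirrors the general non-conjugate calculation.
        prior_var = prior_sd ** 2
        post_vars = []
        for _ in range(n_sim):
            theta = rng.normal(prior_mean, prior_sd)   # plausible true date
            _y = rng.normal(theta, meas_sd)            # hypothetical determination
            post_vars.append(1.0 / (1.0 / prior_var + 1.0 / meas_sd ** 2))
        return float(np.mean(post_vars))

    risk = {name: expected_posterior_var(sd) for name, sd in candidate_sd.items()}
    print(risk)
    print("candidate to date next:", min(risk, key=risk.get))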
992.
A common problem in environmental epidemiology is the estimation and mapping of spatial variation in disease risk. In this paper we analyse data from the Walsall District Health Authority, UK, concerning the spatial distributions of cancer cases compared with controls sampled from the population register. We formulate the risk estimation problem as a nonparametric binary regression problem and consider two different methods of estimation. The first uses a standard kernel method with a cross-validation criterion for choosing the associated bandwidth parameter. The second uses the framework of the generalized additive model (GAM) which has the advantage that it can allow for additional explanatory variables, but is computationally more demanding. For the Walsall data, we obtain similar results using either the kernel method with controls stratified by age and sex to match the age–sex distribution of the cases or the GAM method with random controls but incorporating age and sex as additional explanatory variables. For cancers of the lung or stomach, the analysis shows highly statistically significant spatial variation in risk. For the less common cancers of the pancreas, the spatial variation in risk is not statistically significant.
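As a rough illustration of the kernel method (not the authors' exact estimator, and using synthetic coordinates rather than the Walsall data), the sketch below smooths binary case/control labels with a Gaussian kernel and chooses the bandwidth by leave-one-out cross-validation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic case and control locations; labels: 1 = case, 0 = control.
    cases = rng.normal([2.0, 2.0], 0.8, size=(150, 2))
    controls = rng.uniform(0.0, 5.0, size=(300, 2))
    xy = np.vstack([cases, controls])
    y = np.r_[np.ones(len(cases)), np.zeros(len(controls))]

    def kernel_risk(grid, xy, y, h):
        # Nadaraya-Watson estimate of P(case | location) with a Gaussian kernel.
        d2 = ((grid[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / h ** 2)
        return (w @ y) / w.sum(axis=1)

    def loo_cv_score(xy, y, h):
        # Leave-one-out cross-validation: squared error predicting each label
        # from all the other points.
        d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / h ** 2)
        np.fill_diagonal(w, 0.0)
        p = (w @ y) / w.sum(axis=1)
        return np.mean((y - p) ** 2)

    bandwidths = np.linspace(0.1, 1.5, 15)
    h_best = bandwidths[np.argmin([loo_cv_score(xy, y, h) for h in bandwidths])]
    grid = np.array([[1.0, 1.0], [2.0, 2.0], [4.0, 4.0]])
    print("cross-validated bandwidth:", h_best)
    print("estimated risk at grid points:", kernel_risk(grid, xy, y, h_best))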
993.
Data collected before the routine application of prenatal screening are of unique value in estimating the natural live-birth prevalence of Down syndrome. However, much of these data are from births from over 20 years ago and they are of uncertain quality. In particular, they are subject to varying degrees of underascertainment. Published approaches have used ad hoc corrections to deal with this problem or have been restricted to data sets in which ascertainment is assumed to be complete. In this paper we adopt a Bayesian approach to modelling ascertainment and live-birth prevalence. We consider three prior specifications concerning ascertainment and compare predicted maternal-age-specific prevalence under these three different prior specifications. The computations are carried out by using Markov chain Monte Carlo methods in which model parameters and missing data are sampled.
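To make the data-augmentation idea concrete, here is a minimal sketch of a Gibbs sampler that alternates between imputing the unascertained cases and updating the parameters. It uses invented birth and case counts and a single constant prevalence and a single constant ascertainment probability, rather than the maternal-age-specific model and the prior specifications considered in the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical register data: births and ascertained cases in four strata.
    births = np.array([20000, 15000, 8000, 3000])
    observed = np.array([18, 16, 14, 12])

    def gibbs(births, observed, n_iter=5000):
        # Flat Beta(1, 1) priors on prevalence p and ascertainment a; the
        # unascertained ("missed") cases are the sampled missing data.
        p, a = 0.001, 0.8
        draws = []
        for _ in range(n_iter):
            # Probability that a birth not on the register is a missed case.
            q = p * (1 - a) / (p * (1 - a) + (1 - p))
            missed = rng.binomial(births - observed, q)
            true_cases = observed + missed
            p = rng.beta(1 + true_cases.sum(), 1 + (births - true_cases).sum())
            a = rng.beta(1 + observed.sum(), 1 + missed.sum())
            draws.append((p, a))
        return np.array(draws)

    draws = gibbs(births, observed)
    print("posterior mean prevalence   :", draws[1000:, 0].mean())
    print("posterior mean ascertainment:", draws[1000:, 1].mean())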
994.
The World Health Organization (WHO) diagnostic criteria for diabetes mellitus were determined in part by evidence that in some populations the plasma glucose level 2 h after an oral glucose load is a mixture of two distinct distributions. We present a finite mixture model that allows the two component densities to be generalized linear models and the mixture probability to be a logistic regression model. The model allows us to estimate the prevalence of diabetes and sensitivity and specificity of the diagnostic criteria as a function of covariates and to estimate them in the absence of an external standard. Sensitivity is the probability that a test indicates disease conditionally on disease being present. Specificity is the probability that a test indicates no disease conditionally on no disease being present. We obtained maximum likelihood estimates via the EM algorithm and derived the standard errors from the information matrix and by the bootstrap. In the application to data from the diabetes in Egypt project a two-component mixture model fits well and the two components are interpreted as normal and diabetes. The means and variances are similar to results found in other populations. The minimum misclassification cutpoints decrease with age, are lower in urban areas and are higher in rural areas than the 200 mg dl⁻¹ cutpoint recommended by the WHO. These differences are modest and our results generally support the WHO criterion. Our methods allow the direct inclusion of concomitant data whereas past analyses were based on partitioning the data.
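A minimal sketch of the EM step for a plain two-component normal mixture fitted to simulated glucose values; the paper's model is richer (generalized linear model component densities, a logistic regression for the mixing probability, covariates such as age and urban/rural residence), so this only illustrates the E- and M-steps and a density-crossing cutpoint.

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated 2-h plasma glucose values (mg/dl) from a hypothetical mixture.
    x = np.r_[rng.normal(110, 20, 800), rng.normal(250, 60, 200)]

    def em_two_normals(x, n_iter=200):
        # w is the mixing weight of the second (higher-mean) component.
        w = 0.5
        mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
        sd = np.array([x.std(), x.std()])
        for _ in range(n_iter):
            dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            resp = dens * np.array([1 - w, w])           # E-step: responsibilities
            resp /= resp.sum(axis=1, keepdims=True)
            w = resp[:, 1].mean()                        # M-step: weight, means, variances
            mu = (resp * x[:, None]).sum(0) / resp.sum(0)
            sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(0) / resp.sum(0))
        return w, mu, sd

    w, mu, sd = em_two_normals(x)
    print("estimated prevalence (weight of higher-mean component):", w)

    # Minimum-misclassification cutpoint: where the weighted component densities cross.
    lo, hi = np.argsort(mu)
    wts = np.array([1 - w, w])
    grid = np.linspace(mu[lo], mu[hi], 2000)
    d_lo = wts[lo] * np.exp(-0.5 * ((grid - mu[lo]) / sd[lo]) ** 2) / sd[lo]
    d_hi = wts[hi] * np.exp(-0.5 * ((grid - mu[hi]) / sd[hi]) ** 2) / sd[hi]
    print("cutpoint (mg/dl):", grid[np.argmin(np.abs(d_lo - d_hi))])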
995.
Let X1, X2, ... be real-valued random variables forming a strictly stationary sequence, and satisfying the basic requirement of being either pairwise positively quadrant dependent or pairwise negatively quadrant dependent. Let F be the marginal distribution function of the Xi's, which is estimated by the empirical distribution function Fn and also by a smooth kernel-type estimate F̂n, by means of the segment X1, ..., Xn. These estimates are compared on the basis of their mean squared errors (MSE). The main results of this paper are the following. Under certain regularity conditions, the optimal bandwidth (in the MSE sense) is determined, and is found to be the same as that in the independent identically distributed case. It is also shown that n·MSE(Fn(t)) and n·MSE(F̂n(t)) tend to the same constant as n → ∞, so that one cannot discriminate between the two estimates on the basis of the MSE. Next, if i(n) = min{k ∈ {1, 2, ...}: MSE(Fk(t)) ≤ MSE(F̂n(t))}, then it is proved that i(n)/n tends to 1 as n → ∞. Thus, once again, one cannot choose one estimate over the other in terms of their asymptotic relative efficiency. If, however, the squared bias of F̂n(t) tends to 0 sufficiently fast, or equivalently, the bandwidth hn satisfies the requirement that n·hn³ → 0 as n → ∞, it is shown that, for a suitable choice of the kernel, (i(n) − n)/(n·hn) tends to a positive number as n → ∞. It follows that the deficiency of Fn(t) with respect to F̂n(t), i(n) − n, is substantial, and actually tends to ∞ as n → ∞. In terms of deficiency, the smooth estimate F̂n(t) is preferable to the empirical distribution function Fn(t).
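A quick simulation conveying the first comparison. It uses i.i.d. standard normal data and a logistic smoothing kernel purely for illustration (the paper's results concern pairwise quadrant-dependent stationary sequences): the scaled mean squared errors of the empirical and the kernel-smoothed distribution function estimates at a point come out close to one another.

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(4)

    t, n, n_rep = 0.5, 200, 5000
    true_F = 0.5 * (1 + erf(t / sqrt(2)))   # true N(0, 1) distribution function at t
    h = n ** (-0.4)                         # bandwidth satisfying n * h**3 -> 0

    emp_est, smooth_est = [], []
    for _ in range(n_rep):
        x = rng.standard_normal(n)
        emp_est.append(np.mean(x <= t))                                   # empirical Fn(t)
        smooth_est.append(np.mean(1.0 / (1.0 + np.exp(-(t - x) / h))))    # kernel-type estimate

    print("n * MSE, empirical estimate:", n * np.mean((np.array(emp_est) - true_F) ** 2))
    print("n * MSE, smoothed estimate :", n * np.mean((np.array(smooth_est) - true_F) ** 2))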
996.
997.
A two-sample version of the non-parametric index of tracking for longitudinal data introduced by Foulkes and Davis is described. The index is based on a multivariate U-statistic, and provides a measure of the stochastic ordering of the underlying growth curves of the samples. The utility of the U-statistic approach is explored with two applications related to growth curves and repeated measures analyses.
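The sketch below computes a simple pairwise stochastic-ordering statistic in the spirit of such an index, not necessarily the exact Foulkes-Davis definition: a two-sample U-statistic over all cross-group pairs of growth curves, scoring +1 when one curve dominates at every time point, -1 for the reverse, and 0 when the curves cross. The data are simulated and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical longitudinal data: rows are subjects, columns are repeated
    # measurements; group B is shifted upwards so its curves tend to dominate.
    A = rng.normal(0.0, 1.0, size=(20, 4)).cumsum(axis=1)
    B = rng.normal(0.3, 1.0, size=(25, 4)).cumsum(axis=1)

    def ordering_index(A, B):
        # Average the pairwise ordering kernel over all cross-group pairs.
        total = 0.0
        for a in A:
            for b in B:
                if np.all(b > a):
                    total += 1.0
                elif np.all(a > b):
                    total -= 1.0
        return total / (len(A) * len(B))

    print("two-sample stochastic-ordering index:", ordering_index(A, B))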
998.
In electrical engineering, circuit designs are now often optimized via circuit simulation computer models. Typically, many response variables characterize the circuit's performance. Each response is a function of many input variables, including factors that can be set in the engineering design and noise factors representing manufacturing conditions. We describe a modelling approach which is appropriate for the simulator's deterministic input–output relationships. Non-linearities and interactions are identified without explicit assumptions about the functional form. These models lead to predictors to guide the reduction of the ranges of the designable factors in a sequence of experiments. Ultimately, the predictors are used to optimize the engineering design. We also show how a visualization of the fitted relationships facilitates an understanding of the engineering trade-offs between responses. The example used to demonstrate these methods, the design of a buffer circuit, has multiple targets for the responses, representing different trade-offs between the key performance measures.
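The flavour of the approach can be conveyed with a toy example: fit a cheap surrogate to runs of a deterministic "simulator" and use its predictions to shrink the range of a designable factor. The simulator, the single design and noise factors, and the quadratic least-squares surrogate below are stand-ins chosen for brevity; the paper works with a real circuit simulator, many factors and responses, and more flexible predictors.

    import numpy as np

    rng = np.random.default_rng(6)

    # Toy deterministic "simulator": response depends on a designable factor x
    # and a noise factor z representing manufacturing conditions.
    def simulator(x, z):
        return (x - 0.3) ** 2 + 0.5 * x * z + 0.1 * z

    # Space-filling design over the input region, then simulator runs.
    X = rng.uniform(0.0, 1.0, size=(60, 2))      # columns: x (design), z (noise)
    y = simulator(X[:, 0], X[:, 1])

    def features(X):
        # Quadratic response-surface basis (intercept, linear, interaction, squares).
        x, z = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x), x, z, x * z, x ** 2, z ** 2])

    beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

    # Screen the design range: keep x values whose worst predicted response over
    # the noise range is small, then report the reduced range for the next stage.
    x_grid, z_grid = np.linspace(0, 1, 101), np.linspace(0, 1, 21)
    xx, zz = np.meshgrid(x_grid, z_grid)
    pred = features(np.column_stack([xx.ravel(), zz.ravel()])) @ beta
    worst = pred.reshape(zz.shape).max(axis=0)   # worst case over noise, per x
    keep = x_grid[worst <= np.quantile(worst, 0.25)]
    print("reduced range for design factor x: [%.2f, %.2f]" % (keep.min(), keep.max()))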
999.
This paper considers two types of chaotic map time series models, including the well-known tent, logistic and binary-shift maps as special cases; these are called curved tent and curved binary families. Deterministic behaviour is investigated by invariant distributions, Lyapunov exponents, and by serial dependency. Stochastic time reversal of the families is shown to produce models which have a broader range of stochastic and chaotic properties than their deterministic counterparts. The marginal distributions may have concentrations and restricted supports and are shown to be a non-standard class of invariant distribution. Dependency is generally weaker with the reversed stochastic models. The work gives a broad statistical account of deterministic and stochastically reversed map models, such as are emerging in random number generation, communication systems and cryptography.
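For concreteness, the sketch below iterates the two best-known members of these families, the tent and logistic maps, and estimates their Lyapunov exponents as the orbit average of log |f'(x)| (analytically log 2 for both). The maps and the estimator are standard material and stand in for the curved families studied in the paper.

    import numpy as np

    def tent(x):
        return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

    def tent_deriv(x):
        return 2.0                      # |slope| is 2 everywhere

    def logistic(x):
        return 4.0 * x * (1.0 - x)

    def logistic_deriv(x):
        return 4.0 - 8.0 * x

    def lyapunov(f, fprime, x0=0.123, n=20000, burn=500):
        # Orbit-average estimate of E[log |f'(X)|] under the invariant distribution.
        x, total, count = x0, 0.0, 0
        for i in range(n):
            x = f(x)
            if i >= burn:
                # Guard against an iterate landing exactly on a critical point.
                total += np.log(max(abs(fprime(x)), 1e-12))
                count += 1
        return total / count

    # In double precision the exact slope-2 tent orbit eventually collapses to 0,
    # but the estimate is unaffected here because |f'| is constant.
    print("tent map Lyapunov exponent    :", lyapunov(tent, tent_deriv))          # ~ log 2
    print("logistic map Lyapunov exponent:", lyapunov(logistic, logistic_deriv))  # ~ log 2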
1000.
Institutional Ethnography and Experience as Data   (Total citations: 1; self-citations: 0; citations by others: 1)
Experience, as concept, is contested among feminists as to its epistemological status and thus its usefulness in knowledge claims. Institutional ethnography (Smith 1987) is a feminist methodology that nonetheless relies fundamentally on people's experience. Not as Truth, nor the object of inquiry, but as the point d'appui for sociological inquiry. This article offers a demonstration of institutional ethnography using observational and interview data that show experience as methodologically central to a trustworthy analysis. A moment in the work lives of nursing assistants in a long-term care setting is captured by a participant observer. The analysis produces two lines of argument. One is methodological; it is argued that nursing assistants' experiences are an entry into the social relations of the setting that, when mapped and disclosed, make those experiences understandable in terms of the ruling arrangements permeating both the organization and their own experiences. The other argument is substantive; the inquiry uncovers how a 'quality improvement' strategy in a long-term care hospital in Canada is reorganizing caregivers' values and practices toward a market orientation in which care appears to be compromised. Use of experience as data in this approach holds the analysis accountable to everyday/everynight actualities in a lived world.