41.
42.
BIAS IN LIST-ASSISTED TELEPHONE SAMPLES   (Total citations: 4; self-citations: 1; citations by others: 3)
A number of researchers have suggested list-assisted sampling for the selection of telephone households to overcome some of the operational difficulties associated with the Mitofsky-Waksberg methods of random digit dialing (RDD). An advantage of a list-assisted method of RDD is that an equal probability systematic sample of telephone numbers can be selected and the variances of estimates from such a sample are usually lower than from a clustered design like the Mitofsky-Waksberg method. The main disadvantage of the list-assisted method is that it excludes some households from the sample, thus creating a coverage bias in the estimates. This article describes research on the coverage bias for a particular method of list-assisted sampling. The two key determinants of coverage bias are the proportion of households that are not eligible for the sample and the differences in the characteristics of the covered and not covered populations. The results show that about 4 percent of all households are excluded in national samples using this method of sampling. Furthermore, they show that the differences between the covered and uncovered populations are generally not large. The coverage bias resulting from these conditions may often be small.
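For readers who want to reproduce the order of magnitude of this bias, a minimal sketch (Python) of the usual coverage-bias decomposition, bias = (proportion not covered) × (covered mean − not-covered mean), is given below; the 4 percent exclusion rate comes from the abstract, while the prevalence figures are hypothetical.

```python
# Illustration of the coverage-bias decomposition discussed above.
# All numbers are made up except the ~4% exclusion rate quoted in the abstract.

def coverage_bias(p_not_covered: float, mean_covered: float, mean_not_covered: float) -> float:
    """Bias of the covered-population mean as an estimator of the full-population mean."""
    return p_not_covered * (mean_covered - mean_not_covered)

if __name__ == "__main__":
    # Example: 4% of households excluded, 62% vs 55% prevalence of some characteristic.
    b = coverage_bias(0.04, 0.62, 0.55)
    print(f"coverage bias = {b:.4f}")   # 0.0028 -> small, as the abstract argues
```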
43.
This article shows the influence of being a refugee from Latin America or a nonrefugee immigrant from southern Europe or Finland on self-reported illness, controlling for social factors and lifestyle. The study population consisted of 338 Latin American refugees, a random sample of 396 Finnish and 161 southern European immigrants and 996 age-, sex- and education-matched Swedish controls. The data were analysed unmatched with logistic regression (multivariate analysis) in main effect models. The strongest independent risk indicator for long-term illness was being a Latin American refugee (estimated odds ratio (OR)=2.96, 95% confidence interval (CI)=2.19–3.82). There was a significant association between being a Latin American refugee and period prevalence, ill health and unsatisfied need for care. Being a southern European or Finnish immigrant was a risk indicator of ill health but was not associated with the other dependent factors. Not feeling secure in daily life was a strong risk indicator for long-term illness and ill health (estimated OR=1.89, 95% CI=1.26–2.76 and OR=3.04, 95% CI=1.97–4.48, respectively). Being a Latin American refugee was equal in importance to traditional risk factors such as overweight and not taking regular exercise for long-term illness and ill health.
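A minimal sketch, assuming a logistic main-effects model of the kind described, of how odds ratios and 95% confidence intervals such as those quoted are obtained; the data frame, variable names, and simulated values below are hypothetical, not the study data.

```python
# Sketch of an unmatched logistic regression (main-effects) analysis like the
# one described above, using statsmodels. All data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(0)
n = 500
df = pd.DataFrame({
    "longterm_illness": np.random.binomial(1, 0.3, n),
    "latin_american_refugee": np.random.binomial(1, 0.2, n),
    "age": np.random.randint(20, 65, n),
    "female": np.random.binomial(1, 0.5, n),
})

model = smf.logit("longterm_illness ~ latin_american_refugee + age + female", data=df).fit()

# Odds ratios and 95% CIs are the exponentiated coefficients and CI bounds.
or_table = pd.concat(
    [np.exp(model.params).rename("OR"),
     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(or_table)
```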
44.
Summary.  Alongside the development of meta-analysis as a tool for summarizing research literature, there is renewed interest in broader forms of quantitative synthesis that are aimed at combining evidence from different study designs or evidence on multiple parameters. These have been proposed under various headings: the confidence profile method, cross-design synthesis, hierarchical models and generalized evidence synthesis. Models that are used in health technology assessment are also referred to as representing a synthesis of evidence in a mathematical structure. Here we review alternative approaches to statistical evidence synthesis, and their implications for epidemiology and medical decision-making. The methods include hierarchical models, models informed by evidence on different functions of several parameters and models incorporating both of these features. The need to check for consistency of evidence when using these powerful methods is emphasized. We develop a rationale for evidence synthesis that is based on Bayesian decision modelling and expected value of information theory, which stresses not only the need for a lack of bias in estimates of treatment effects but also a lack of bias in assessments of uncertainty. The increasing reliance of governmental bodies like the UK National Institute for Clinical Excellence on complex evidence synthesis in decision modelling is discussed.
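As one concrete instance of the hierarchical models mentioned, here is a minimal random-effects pooling sketch using the DerSimonian-Laird estimator; this is a frequentist illustration, not the Bayesian decision-modelling framework the authors develop, and the effect sizes and variances are hypothetical.

```python
# Minimal random-effects (hierarchical) pooling of study-level effects via the
# DerSimonian-Laird estimator. Effect sizes and within-study variances are hypothetical.
import numpy as np

def dersimonian_laird(y, v):
    """Pool study effects y with within-study variances v under a random-effects model."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                      # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)                # pooled effect
    se = np.sqrt(1.0 / np.sum(w_star))
    return mu, se, tau2

mu, se, tau2 = dersimonian_laird([0.20, 0.35, 0.10], [0.01, 0.02, 0.015])
print(f"pooled effect = {mu:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```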
45.
It is well-known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and Cramér-Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
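A numerical sketch of the one-dimensional maximization the abstract alludes to, written for the standard Type II doubly censored two-parameter exponential likelihood with θ known; the sharp bounds derived in the paper are not reproduced, so a crude bracket replaces them, and the data are simulated.

```python
# Numerical ML estimation of the exponential scale delta under Type II double
# censoring with theta known: the r smallest and s largest order statistics are
# censored, and the likelihood below is the standard doubly censored form.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(delta, x_obs, theta, r, s):
    """Negative log-likelihood for delta given the observed middle order statistics."""
    z = (np.sort(x_obs) - theta) / delta
    ll = (r * np.log1p(-np.exp(-z[0]))     # r values censored below the smallest observed
          - s * z[-1]                      # s values censored above the largest observed
          - len(x_obs) * np.log(delta)
          - np.sum(z))
    return -ll

# Hypothetical data: theta known, only the middle order statistics observed.
rng = np.random.default_rng(0)
theta, delta_true, n, r, s = 2.0, 1.5, 30, 3, 4
x = np.sort(theta + rng.exponential(delta_true, n))[r:n - s]

res = minimize_scalar(neg_log_lik, bounds=(1e-3, 20.0), args=(x, theta, r, s), method="bounded")
print(f"ML estimate of delta: {res.x:.3f}")
```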
46.
Summary. We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used and the mesh size of the nets) are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log-catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to the prediction of catches for single and aggregated ships.
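A toy Bayesian model averaging sketch in the spirit of the catch model: enumerate a handful of hypothetical regressors for log-catch, approximate posterior model probabilities with BIC weights, and report them; the paper's Markov chain Monte Carlo search over 177 million models and the probit component for zero catches are not reproduced.

```python
# Toy Bayesian model averaging over a small, hypothetical regressor set for
# log-catch: fit every subset by OLS and turn BIC into approximate posterior
# model probabilities under a uniform model prior.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
regressors = {                                   # hypothetical ship/catch covariates
    "ship_size": rng.normal(50.0, 10.0, n),
    "mesh_size": rng.normal(130.0, 15.0, n),
    "month": rng.integers(1, 13, n).astype(float),
}
log_catch = 2.0 + 0.03 * regressors["ship_size"] + rng.normal(0.0, 1.0, n)

bic = {}
for k in range(len(regressors) + 1):
    for subset in itertools.combinations(regressors, k):
        X = (sm.add_constant(np.column_stack([regressors[v] for v in subset]))
             if subset else np.ones((n, 1)))
        bic[subset] = sm.OLS(log_catch, X).fit().bic

# Convert BIC differences to approximate posterior model probabilities.
values = np.array(list(bic.values()))
weights = np.exp(-0.5 * (values - values.min()))
weights /= weights.sum()
for subset, w in zip(bic, weights):
    print(f"P({subset or ('intercept only',)} | data) ~ {w:.3f}")
```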
47.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation‐based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperformed parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
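A minimal sketch of a nonparametric empirical likelihood confidence interval for a mean under simple random sampling (no survey weights or design adjustments, unlike the paper); the zero-inflated data below are simulated.

```python
# Empirical likelihood (EL) confidence interval for a population mean: for each
# candidate mu, profile out the Lagrange multiplier and keep mu values whose
# -2 log EL ratio falls below the chi-square(1) cutoff.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean at candidate value mu."""
    d = x - mu
    if d.max() <= 0 or d.min() >= 0:          # mu outside the convex hull of the data
        return np.inf
    eps = 1e-10
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(2)
x = np.where(rng.random(80) < 0.6, 0.0, rng.exponential(5.0, 80))   # ~60% zeros

# Grid-search the 95% interval: mu values whose -2 log ratio is below the cutoff.
grid = np.linspace(x.min() + 1e-6, x.max() - 1e-6, 2000)
cutoff = chi2.ppf(0.95, df=1)
inside = grid[np.array([el_log_ratio(x, m) for m in grid]) <= cutoff]
print(f"95% EL interval for the mean: ({inside.min():.3f}, {inside.max():.3f})")
```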
48.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
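A small simulation sketch of the kind of distortion LOCF can introduce under the null: both arms improve identically but drop out at different rates, so carrying the last observation forward produces a spurious endpoint difference. The mechanism, sample sizes, and dropout rates are hypothetical (and simpler than the paper's MAR designs), and the MMRM comparator, which needs mixed-model software, is not reproduced.

```python
# Toy null simulation: identical true trajectories in both arms, different
# per-visit dropout rates, LOCF imputation, then a t-test on the endpoint.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def one_trial(n_per_arm=100, n_visits=4):
    """One null trial: both arms improve identically, but dropout rates differ."""
    trend = np.linspace(0.0, 3.0, n_visits)               # same true trajectory in both arms
    endpoint = {}
    for arm, drop_p in ((0, 0.10), (1, 0.25)):            # arms differ only in dropout rate
        y = trend + rng.normal(0.0, 2.0, (n_per_arm, n_visits))
        for i in range(n_per_arm):
            for t in range(1, n_visits):
                if rng.random() < drop_p:
                    y[i, t:] = y[i, t - 1]                 # LOCF: freeze at last observed value
                    break
        endpoint[arm] = y[:, -1]
    _, p = stats.ttest_ind(endpoint[1], endpoint[0])
    return p < 0.05

type1_error = np.mean([one_trial() for _ in range(2000)])
print(f"Empirical Type I error of the LOCF endpoint test: {type1_error:.3f}")  # well above 0.05
```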
49.
50.
Children may be more susceptible to toxicity from some environmental chemicals than adults. This susceptibility may occur during narrow age periods (windows), which can last from days to years depending on the toxicant. Breathing rates specific to narrow age periods are useful to assess inhalation dose during suspected windows of susceptibility. Because existing breathing rates used in risk assessment are typically for broad age ranges or are based on data not representative of the population, we derived daily breathing rates for narrow age ranges of children designed to be more representative of the current U.S. children's population. These rates were derived using the metabolic conversion method of Layton (1993) and energy intake data adjusted to represent the U.S. population from a relatively recent dietary survey (CSFII 1994–1996, 1998). We calculated conversion factors more specific to children than those previously used. Both nonnormalized (L/day) and normalized (L/kg-day) breathing rates were derived and found comparable to rates derived using energy estimates that are accurate for the individuals sampled but not representative of the population. Estimates of breathing rate variability within a population can be used with stochastic techniques to characterize the range of risk in the population from inhalation exposures. For each age and age-gender group, we present the mean, standard error of the mean, percentiles (50th, 90th, and 95th), geometric mean, standard deviation, 95th percentile, and best-fit parametric models of the breathing rate distributions. The standard errors characterize uncertainty in the parameter estimate, while the percentiles describe the combined interindividual and intra-individual variability of the sampled population. These breathing rates can be used for risk assessment of subchronic and chronic inhalation exposures of narrow age groups of children.
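A sketch of the metabolic conversion method of Layton (1993) referenced above: daily inhalation rate = energy expenditure × oxygen uptake per unit energy (H) × ventilatory equivalent (VQ). The default values H ≈ 0.21 L O2 per kcal and VQ ≈ 27 are commonly cited general-purpose figures, not the child-specific conversion factors derived in the paper, and the example numbers are hypothetical.

```python
# Metabolic conversion of daily energy expenditure to a daily breathing rate.
# H and VQ defaults are commonly cited values, not the paper's child-specific factors.

def breathing_rate_l_per_day(energy_kcal_per_day: float,
                             h_l_o2_per_kcal: float = 0.21,
                             ventilatory_equivalent: float = 27.0) -> float:
    """Daily inhalation rate (L/day) from daily energy expenditure (kcal/day)."""
    return energy_kcal_per_day * h_l_o2_per_kcal * ventilatory_equivalent

if __name__ == "__main__":
    # Hypothetical example: a child expending 1600 kcal/day, body weight 20 kg.
    vr = breathing_rate_l_per_day(1600)
    print(f"{vr:.0f} L/day, {vr / 20:.0f} L/kg-day")   # normalized rate per kg body weight
```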