11.
Modelling daily multivariate pollutant data at multiple sites (total citations: 7; self-citations: 1; citations by others: 6)
Summary. This paper considers the spatiotemporal modelling of four pollutants measured daily at eight monitoring sites in London over a 4-year period. Such multiple-pollutant data sets, measured over time at multiple sites within a region of interest, are typical. Here, the modelling was carried out to provide the exposure for a study investigating the health effects of air pollution. Alternative objectives include the design problem of positioning a new monitoring site, or determining, for regulatory purposes, whether environmental standards are being met. In general, analyses are hampered by missing data due, for example, to a particular pollutant not being measured at a site, a monitor being inactive by design (e.g. a 6-day monitoring schedule), or a monitor being unreliable or faulty. Data of this type are modelled here within a dynamic linear modelling framework, in which the dependences across time, space and pollutants are exploited. Throughout, the approach is Bayesian, with implementation via Markov chain Monte Carlo sampling.
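The abstract gives no implementation detail, but the way a dynamic linear model absorbs missing observations can be illustrated with a much-reduced sketch: a univariate local-level DLM whose Kalman filter simply carries the prediction through days with no measurement. The function name, variance values and toy series below are assumptions for illustration; the paper's actual model is multivariate across sites and pollutants and is fitted by MCMC rather than with fixed variances.

```python
import numpy as np

def local_level_kalman(y, sigma_obs=1.0, sigma_state=0.5, m0=0.0, c0=10.0):
    """Kalman filter for a local-level DLM: y_t = mu_t + v_t, mu_t = mu_{t-1} + w_t.

    Missing observations (np.nan) are handled by skipping the update step,
    so the prediction is simply carried forward through the gap."""
    n = len(y)
    m = np.empty(n)   # filtered state means
    c = np.empty(n)   # filtered state variances
    for t in range(n):
        # prediction step
        a = m0 if t == 0 else m[t - 1]
        r = (c0 if t == 0 else c[t - 1]) + sigma_state**2
        if np.isnan(y[t]):
            m[t], c[t] = a, r            # no data: keep the prediction
        else:
            k = r / (r + sigma_obs**2)   # Kalman gain
            m[t] = a + k * (y[t] - a)
            c[t] = (1 - k) * r
    return m, c

# toy daily series with a gap (e.g. a monitor down for several days)
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.5, 200)) + 20
obs = truth + rng.normal(0, 1.0, 200)
obs[50:60] = np.nan
filtered_mean, filtered_var = local_level_kalman(obs)
```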
12.
Sets of relatively short time series arise in many situations. One aspect of their analysis may be the detection of outlying series. We examine the performance of standard normal outlier tests applied to the means, or to simple functions of the means, of AR(1) series, not necessarily of equal lengths. Although unequal series lengths imply that the means have unequal variances, which are known only approximately, it is shown that nominal significance levels hold good under most circumstances. Thus a standard outlier test can usefully be applied, avoiding the complication of estimating the time series parameters. The test's power is affected by unequal lengths, being higher when the slippage occurs in one of the longer series.
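As a rough illustration of the idea, the sketch below simulates AR(1) series of unequal lengths, introduces slippage in one of the longer series, and applies a simple normal outlier test to the series means with a Bonferroni-adjusted critical value. The function names, AR parameter and the particular standardization are illustrative assumptions, not the exact tests studied in the paper.

```python
import numpy as np
from scipy import stats

def simulate_ar1(n, phi, sigma=1.0, mu=0.0, rng=None):
    """Generate an AR(1) series of length n with mean mu."""
    rng = rng or np.random.default_rng()
    x = np.empty(n)
    x[0] = mu + rng.normal(0, sigma / np.sqrt(1 - phi**2))
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0, sigma)
    return x

def outlier_test_on_means(series, alpha=0.05):
    """Flag series whose mean is outlying, using a simple normal test on the
    standardized means with a Bonferroni-adjusted critical value."""
    means = np.array([s.mean() for s in series])
    z = (means - means.mean()) / means.std(ddof=1)
    crit = stats.norm.ppf(1 - alpha / (2 * len(means)))
    return np.abs(z) > crit, z

rng = np.random.default_rng(1)
lengths = [30, 40, 50, 60, 80, 100, 120, 150]     # unequal series lengths
series = [simulate_ar1(n, phi=0.5, rng=rng) for n in lengths]
series[-1] += 2.0                                  # slippage in one of the longer series
flags, z_scores = outlier_test_on_means(series)
```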
13.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood-based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood-based MAR approach – mixed-model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of data that are missing not at random (MNAR). No universally best approach to analysis of longitudinal data exists. However, likelihood-based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
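A minimal simulation can show the mechanics of the comparison (it does not reproduce the paper's scenarios or numbers). Below, a two-arm trial with no true treatment effect and MAR dropout is analysed two ways: LOCF followed by a t-test at the endpoint, and a likelihood-based analysis of the observed data, here a random-intercept mixed model used as a simple stand-in for a full MMRM with unstructured covariance. All names and settings are illustrative.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)

def simulate_trial(n_per_arm=50):
    """Two-arm trial, two post-baseline visits, NO true treatment effect,
    with MAR dropout: patients with a poor visit-1 response tend to drop out."""
    rows = []
    for arm in (0, 1):
        for i in range(n_per_arm):
            subj = f"{arm}-{i}"
            b = rng.normal(0, 1)                      # subject random effect
            y1 = b + rng.normal(0, 1)                 # visit-1 change from baseline
            y2 = b + rng.normal(0, 1)                 # visit-2 (endpoint) change
            rows.append((subj, arm, 1, y1))
            dropout = rng.random() < 0.5 * (y1 > 0)   # MAR: depends on observed y1
            if not dropout:
                rows.append((subj, arm, 2, y2))
    return pd.DataFrame(rows, columns=["subject", "arm", "visit", "y"])

def p_locf(df):
    """LOCF: carry the last available value forward to the endpoint, then a t-test."""
    endpoint = df.sort_values("visit").groupby(["subject", "arm"])["y"].last().reset_index()
    a = endpoint.loc[endpoint.arm == 0, "y"]
    b = endpoint.loc[endpoint.arm == 1, "y"]
    return stats.ttest_ind(a, b).pvalue

def p_mixed(df):
    """Likelihood-based analysis of the observed data (random-intercept model as a
    simple stand-in for MMRM). 'early' = 1 at visit 1, so the 'arm' coefficient is
    the treatment contrast at the endpoint visit."""
    df = df.assign(early=(df.visit == 1).astype(int))
    fit = smf.mixedlm("y ~ arm * early", df, groups=df["subject"]).fit()
    return fit.pvalues["arm"]

n_sim, alpha = 200, 0.05        # kept small so the sketch runs quickly
rej_locf = rej_mixed = 0
for _ in range(n_sim):
    df = simulate_trial()
    rej_locf += p_locf(df) < alpha
    rej_mixed += p_mixed(df) < alpha
print(f"Type I error, LOCF: {rej_locf / n_sim:.3f}, mixed model: {rej_mixed / n_sim:.3f}")
```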
14.
Merging information for semiparametric density estimation (total citations: 1; self-citations: 0; citations by others: 1)
Summary. The density ratio model specifies that the likelihood ratio of m − 1 probability density functions with respect to the m-th is of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. Combining the data from all the samples leads to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.
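For the two-sample case, the density ratio model has a well-known logistic-regression dual: fitting a logistic regression of sample membership on the tilt function h(x) over the pooled data yields the tilt estimate, and both densities can then be estimated from all observations with the implied weights. The sketch below uses that dual with h(x) = (x, x²) and a fixed Gaussian kernel bandwidth; it is an illustrative stand-in for the empirical-likelihood machinery and data-driven bandwidth choices of the paper.

```python
import numpy as np
import statsmodels.api as sm

def merged_density_estimates(x_ref, x_other, grid, bandwidth,
                             tilt=lambda x: np.column_stack([x, x**2])):
    """Kernel density estimates under a two-sample density ratio model
    f_other(x) = exp(alpha + beta' h(x)) f_ref(x), with h given by `tilt`.

    The tilt is estimated by logistic regression on the pooled sample; both
    densities are then estimated from ALL observations with the implied weights."""
    pooled = np.concatenate([x_ref, x_other])
    label = np.concatenate([np.zeros(len(x_ref)), np.ones(len(x_other))])
    X = sm.add_constant(tilt(pooled))
    pi = sm.Logit(label, X).fit(disp=0).predict(X)   # P(obs came from the "other" sample)
    w_ref = (1 - pi) / len(x_ref)                    # weights for the reference density
    w_other = pi / len(x_other)                      # weights for the other density

    def wkde(weights):
        u = (grid[:, None] - pooled[None, :]) / bandwidth
        return (weights * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).sum(axis=1) / bandwidth

    return wkde(w_ref), wkde(w_other)

rng = np.random.default_rng(3)
x0 = rng.normal(0.0, 1, 150)      # reference sample
x1 = rng.normal(0.5, 1, 80)       # second sample; normal pairs satisfy the model with h(x) = (x, x^2)
grid = np.linspace(-4, 4, 200)
f0_hat, f1_hat = merged_density_estimates(x0, x1, grid, bandwidth=0.4)
```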
15.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of the hazard, density, and survivor functions for randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes; these fits yield an estimate of the hazard function, which in turn is used to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, then semi-parametric estimates may be obtained using a maximum likelihood estimate of one crude lifetime and the empirical distribution function of the other. Simulation studies show that the survivor function estimates from crude lifetimes compare favourably with those given by the product-limit estimator when crude lifetimes are chosen correctly. Other advantages are discussed.
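A deliberately simplified sketch of the construction, assuming constant (exponential) cause-specific hazards rather than the goodness-of-fit-based parametric fits of the paper: fit the crude hazards by maximum likelihood, build the net survivor and density functions of the lifetime of interest (independence assumed), and compare with the product-limit estimator. All distributions and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.exponential(1 / 0.5, n)      # latent lifetime of interest, hazard 0.5
c = rng.exponential(1 / 0.3, n)      # latent competing (censoring) lifetime, hazard 0.3
t = np.minimum(x, c)                 # observed time
d = (x <= c).astype(int)             # 1 = event of interest observed, 0 = censored

# MLE of constant cause-specific hazards: events of each type / total time at risk
lam_x = d.sum() / t.sum()
lam_c = (1 - d).sum() / t.sum()

# Net survivor and density functions of the lifetime of interest (independence assumed)
grid = np.linspace(0, t.max(), 200)
surv_x = np.exp(-lam_x * grid)
dens_x = lam_x * surv_x

# Product-limit (Kaplan-Meier) estimator for comparison
order = np.argsort(t)
t_sorted, d_sorted = t[order], d[order]
at_risk = n - np.arange(n)
km = np.cumprod(1 - d_sorted / at_risk)   # S(t) just after each ordered time
```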
16.
Conservation biology aims at assessing the status of a population on the basis of information that is often incomplete. Integrated population modelling based on state-space models appears to be a powerful and relevant way of combining several types of information, such as capture-recapture data and population surveys, into a single likelihood. In this paper, the authors describe the principles of integrated population modelling and evaluate its performance for conservation biology in a case study of the black-footed albatross, a northern Pacific albatross species suspected to be impacted by longline fishing.
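The "single likelihood from several types of information" idea can be sketched in a few lines, at the cost of heavy simplification: deterministic population dynamics instead of a state-space model, a one-year recapture summary instead of full capture-recapture histories, and invented numbers throughout. The joint likelihood is simply the sum of the survey and capture-recapture log-likelihood components.

```python
import numpy as np
from scipy import stats, optimize

# Toy data (illustrative only): annual counts and a simple mark-recapture summary
counts = np.array([102, 98, 106, 111, 118, 121, 130])   # survey counts, years 0..6
marked, recaptured = 200, 168                            # birds marked, seen again next year

def neg_log_lik(params):
    """Joint likelihood: Poisson survey counts from deterministic dynamics
    N_{t+1} = (s + f) N_t, plus a binomial capture-recapture term informing survival s."""
    log_n0, logit_s, log_f = params
    n0 = np.exp(log_n0)
    s = 1 / (1 + np.exp(-logit_s))
    f = np.exp(log_f)
    traj = n0 * (s + f) ** np.arange(len(counts))        # expected abundance trajectory
    ll_counts = stats.poisson.logpmf(counts, traj).sum()  # survey component
    ll_cr = stats.binom.logpmf(recaptured, marked, s)      # capture-recapture component
    return -(ll_counts + ll_cr)

res = optimize.minimize(neg_log_lik, x0=[np.log(100), 1.0, np.log(0.2)], method="Nelder-Mead")
n0_hat = np.exp(res.x[0])
s_hat = 1 / (1 + np.exp(-res.x[1]))
f_hat = np.exp(res.x[2])
```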
17.
18.
By approximating the nonparametric component with a regression spline in generalized partial linear models (GPLM), robust generalized estimating equations (GEE), involving a bounded score function and a leverage-based weighting function, can be used to estimate the regression parameters in a GPLM robustly for longitudinal or clustered data. In this paper, robust score test statistics are proposed for testing the regression parameters, and their asymptotic distributions under the null hypothesis and under a class of local alternative hypotheses are studied. The proposed score tests rely on estimating a smaller model that does not involve the parameters under test, and they perform well in the simulation studies and real data analysis conducted in this paper.
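The robust, spline-based GEE tests of the paper are considerably more involved, but the structural point of the abstract, that a score test only requires fitting the smaller model without the tested parameters, can be illustrated in an ordinary logistic regression. The sketch below fits the null model, evaluates the full-model score and information at that fit, and forms the usual chi-squared statistic; function and variable names are illustrative.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

def score_test_logistic(y, X_null, x_test):
    """Rao score test for adding the column(s) x_test to a logistic regression.

    Only the smaller (null) model is fitted; the full-model score and information
    are evaluated at the null estimate."""
    null_fit = sm.Logit(y, X_null).fit(disp=0)
    X_full = np.column_stack([X_null, x_test])
    beta0 = np.concatenate([null_fit.params, np.zeros(x_test.shape[1])])
    p = 1 / (1 + np.exp(-X_full @ beta0))             # fitted probabilities under H0
    U = X_full.T @ (y - p)                            # score of the full model at the null fit
    I = X_full.T @ (X_full * (p * (1 - p))[:, None])  # expected information
    S = U @ np.linalg.solve(I, U)
    return S, stats.chi2.sf(S, df=x_test.shape[1])

rng = np.random.default_rng(5)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * x1))))   # x2 truly has no effect
stat, pval = score_test_logistic(y, sm.add_constant(x1), x2[:, None])
```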
19.
Quantifying uncertainty in the biospheric carbon flux for England and Wales (total citations: 1; self-citations: 0; citations by others: 1)
Summary. A crucial issue in the current global warming debate is the effect of vegetation and soils on carbon dioxide (CO2) concentrations in the atmosphere. Vegetation can extract CO2 through photosynthesis, but respiration, decay of soil organic matter and disturbance effects such as fire return it to the atmosphere. The balance of these processes is the net carbon flux. To estimate the biospheric carbon flux for England and Wales, we address the statistical problem of inference for the sum of multiple outputs from a complex deterministic computer code whose input parameters are uncertain. The code is a process model which simulates the carbon dynamics of vegetation and soils, including the amount of carbon that is stored as a result of photosynthesis and the amount that is returned to the atmosphere through respiration. The aggregation of outputs corresponding to multiple sites and types of vegetation in a region gives an estimate of the total carbon flux for that region over a period of time. Expert prior opinions are elicited for marginal uncertainty about the relevant input parameters and for correlations of inputs between sites. A Gaussian process model is used to build emulators of the multiple code outputs and Bayesian uncertainty analysis is then used to propagate uncertainty in the input parameters through to uncertainty on the aggregated output. Numerical results are presented for England and Wales in the year 2000. It is estimated that vegetation and soils in England and Wales constituted a net sink of 7.55 Mt C (1 Mt C = 10¹² g of carbon) in 2000, with standard deviation 0.56 Mt C resulting from the sources of uncertainty that are considered.
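A compressed sketch of the emulation-plus-propagation workflow, with a cheap analytic function standing in for the carbon-dynamics simulator, sites treated as independent (the paper elicits between-site correlations), and all priors and numbers invented: fit a Gaussian process emulator to a small design of simulator runs, draw inputs from the prior, push the draws through the emulator and aggregate across sites.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

def process_model(theta):
    """Cheap stand-in for the expensive carbon-dynamics simulator:
    net flux at one site as a nonlinear function of two uncertain inputs."""
    return 2.0 * np.sin(theta[:, 0]) + 0.5 * theta[:, 1] ** 2 - 1.0

# 1. Build an emulator from a small designed set of simulator runs
design = rng.uniform([-2, -1], [2, 1], size=(30, 2))
runs = process_model(design)
gp = GaussianProcessRegressor(ConstantKernel() * RBF([1.0, 1.0]), normalize_y=True).fit(design, runs)

# 2. Propagate input uncertainty: sample inputs from an (elicited) prior,
#    push them through the emulator, and aggregate over sites
n_sites, n_draws = 5, 2000
flux_draws = np.zeros(n_draws)
for _ in range(n_sites):
    theta = rng.normal([0.0, 0.0], [0.7, 0.4], size=(n_draws, 2))   # prior draws for this site
    flux_draws += gp.sample_y(theta, n_samples=1, random_state=int(rng.integers(10**9))).ravel()

print(f"aggregated flux: mean {flux_draws.mean():.2f}, sd {flux_draws.std():.2f}")
```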
20.
A nonlinear, multi-level, multi-objective optimization planning model was built for the S oilfield, covering exploration, development, refining, machinery, public utilities and other departments as well as the field as a whole. Grey relational analysis, an improved grey prediction method and regression analysis were applied to derive the constraint equations of the planning model and to linearize them. Corresponding computation software was written so that the investment and output value of each of the oilfield's departments could be quickly forecast and optimized for each year of the Ninth Five-Year Plan period. The optimized results were compared with the oilfield's past or planned values, providing a useful reference for oilfield planning decisions.
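Of the techniques mentioned, the grey-prediction step is the most self-contained; the sketch below implements a basic GM(1,1) forecast on made-up annual figures. The abstract's improved grey prediction, the grey relational analysis and the multi-level optimization model itself are not reproduced here.

```python
import numpy as np

def gm11_forecast(x0, n_ahead):
    """GM(1,1) grey prediction: fit the grey differential equation to a short
    series x0 and forecast n_ahead further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # mean generating sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # develop coefficient a, grey input b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # fitted accumulated series
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                            # forecasts only

# Toy example: annual investment of one department (units and values illustrative)
history = [12.1, 13.0, 14.2, 15.1, 16.4]
print(gm11_forecast(history, n_ahead=3))
```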