201.
A Bayesian estimator based on Franklin's randomized response procedure is proposed for proportion estimation in surveys dealing with a sensitive characteristic. The method is simple to implement and avoids the usual drawback of Franklin's estimator, namely the occurrence of negative estimates when the population proportion is small. A simulation study assesses the performance of the proposed estimator and of the corresponding credible interval.
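As a rough illustration of the idea (not the authors' exact estimator), the sketch below assumes a simple Franklin-type design in which each respondent reports one value drawn from N(mu1, sigma) if they carry the sensitive attribute and from N(mu2, sigma) otherwise; with a Beta prior on the proportion pi, a posterior mean computed on a grid stays inside [0, 1] even when the usual moment estimator goes negative. The function name, parameter values, and simulated survey are all illustrative.

    import numpy as np
    from scipy.stats import norm, beta

    def franklin_posterior_mean(z, mu1, mu2, sigma, a=1.0, b=1.0, npts=2001):
        """Grid approximation to the posterior mean of pi under a Franklin-type
        randomized response model: each report is drawn from N(mu1, sigma) with
        probability pi (sensitive attribute present) and N(mu2, sigma) otherwise."""
        pis = np.linspace(0.0, 1.0, npts)
        f1 = norm.pdf(z[:, None], mu1, sigma)            # shape (n, 1)
        f2 = norm.pdf(z[:, None], mu2, sigma)
        loglik = np.log(pis * f1 + (1.0 - pis) * f2).sum(axis=0)
        logpost = loglik + beta.logpdf(pis, a, b)        # Beta(a, b) prior on pi
        w = np.exp(logpost - logpost.max())
        return float(np.sum(pis * w) / np.sum(w))

    # Illustrative simulated survey with a small true proportion.
    rng = np.random.default_rng(0)
    pi_true, mu1, mu2, sigma, n = 0.05, 5.0, 1.0, 1.0, 200
    member = rng.random(n) < pi_true
    z = rng.normal(np.where(member, mu1, mu2), sigma)

    moment = (z.mean() - mu2) / (mu1 - mu2)              # can fall below zero
    print(moment, franklin_posterior_mean(z, mu1, mu2, sigma))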
202.
This study develops dose–response models for Ebolavirus using previously published data sets from the open literature. Two such articles were identified in which three different species of nonhuman primates were challenged with aerosolized Ebolavirus in order to study pathology and clinical disease progression. Dose groups were combined and pooled across each study to facilitate modeling. The endpoint of each experiment was death. The exponential and exact beta-Poisson models were fit to the data using maximum likelihood estimation. Both models provided a good fit, but the exact beta-Poisson was recommended because it more closely approximated the probability of response at low doses. Although transmission is generally considered to be dominated by person-to-person contact, aerosolization is a possible route of exposure, and one that could be particularly concerning for persons in occupational roles managing contaminated liquid wastes from patients being treated for Ebola infection and for the wastewater community responsible for disinfection. This study therefore produces the mathematical relationship between exposure dose and risk of death for the inhalation route needed to support quantitative microbial risk assessment aimed at informing risk mitigation strategies, including personal protection policies against occupational exposures.
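For concreteness, here is a minimal maximum-likelihood fit of the two model forms named above, the exponential model and the exact beta-Poisson model (the latter via the confluent hypergeometric function 1F1), to a small, purely hypothetical set of quantal aerosol-challenge data. The dose values, group sizes, death counts, and function names are placeholders, not the published data.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import hyp1f1

    # Hypothetical quantal data: dose, animals exposed, deaths (placeholders).
    dose = np.array([10.0, 100.0, 1000.0])
    n    = np.array([6, 6, 6])
    dead = np.array([1, 4, 6])

    def p_exponential(d, k):
        return 1.0 - np.exp(-k * d)

    def p_beta_poisson_exact(d, alpha, beta):
        # exact beta-Poisson: P(d) = 1 - 1F1(alpha, alpha + beta, -d)
        return 1.0 - hyp1f1(alpha, alpha + beta, -d)

    def neg_log_lik(p):
        p = np.clip(p, 1e-12, 1.0 - 1e-12)
        return -np.sum(dead * np.log(p) + (n - dead) * np.log(1.0 - p))

    # Optimize on the log scale to keep the parameters positive.
    fit_exp = minimize(lambda t: neg_log_lik(p_exponential(dose, np.exp(t[0]))),
                       x0=[np.log(0.01)], method="Nelder-Mead")
    fit_bp  = minimize(lambda t: neg_log_lik(p_beta_poisson_exact(dose, *np.exp(t))),
                       x0=[np.log(0.2), np.log(10.0)], method="Nelder-Mead")
    print("exponential k =", np.exp(fit_exp.x[0]), " -2logL =", 2 * fit_exp.fun)
    print("beta-Poisson (alpha, beta) =", np.exp(fit_bp.x), " -2logL =", 2 * fit_bp.fun)

Comparing the minimized deviances of the two fits mirrors the model-comparison step described in the abstract.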
203.
Web surveys are an established data collection mode that uses written language to provide information. The written language is accompanied by visual elements, such as presentation formats and shapes. Research has shown that these visual elements influence response behavior because respondents sometimes use interpretive heuristics to make sense of them. One such heuristic is the 'left and top means first' (LTMF) heuristic, which suggests that respondents tend to believe that a response scale consistently runs from left to right or from top to bottom. We conducted a web survey experiment to investigate how violations of the LTMF heuristic affect response behavior and data quality. A random half of respondents received response options in a consistent order and the other half received response options in an inconsistent order. The results reveal significantly different response distributions between the two groups. We also found that inconsistently ordered response options significantly increase response times and decrease data quality in terms of criterion validity. We therefore recommend using response options that follow the design strategies of the LTMF heuristic.
204.
In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of the objective information in data available to estimate a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, in mechanistically derived models, where parameters are interpreted as having strong practical meaning, nonidentifiability implies underinformativeness of the available data. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose–response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure that the generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
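A profile likelihood check of the kind mentioned above can be sketched in a few lines. The toy model below (not one of the paper's examples) deliberately contains a structurally nonidentifiable pair of parameters, since only their product enters the response probability; profiling one parameter while maximizing over the other yields an essentially flat profile, which is the warning sign. Data and names are illustrative.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical quantal data nominally generated by P(d) = 1 - exp(-a*b*d):
    # only the product a*b is identifiable from such data.
    dose = np.array([1.0, 5.0, 25.0])
    n    = np.array([10, 10, 10])
    pos  = np.array([2, 6, 10])

    def nll(a, b):
        p = np.clip(1.0 - np.exp(-a * b * dose), 1e-12, 1.0 - 1e-12)
        return -np.sum(pos * np.log(p) + (n - pos) * np.log(1.0 - p))

    a_grid, profile = np.logspace(-3, 1, 9), []
    for a in a_grid:
        # profile out b at each fixed a; optimize over log(b) to keep b > 0
        res = minimize_scalar(lambda logb: nll(a, np.exp(logb)),
                              bounds=(-20, 20), method="bounded")
        profile.append(res.fun)
    print(np.round(profile, 3))   # an essentially flat profile flags nonidentifiability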
205.
Robert M. Park, Risk Analysis, 2020, 40(12): 2561–2571
Uncertainty in model predictions of exposure response at low exposures is a problem for risk assessment. A particular interest is the internal concentration of an agent in biological systems as a function of external exposure concentrations. Physiologically based pharmacokinetic (PBPK) models permit estimation of internal exposure concentrations in target tissues, but most assume that model parameters are either fixed or instantaneously dose-dependent. Taking into account response times for biological regulatory mechanisms introduces new dynamic behaviors that have implications for low-dose exposure response under chronic exposure. A simple one-compartment simulation model is described in which internal concentrations summed over time exhibit significant nonlinearity and nonmonotonicity in relation to external concentrations due to delayed up- or downregulation of a metabolic pathway. These behaviors could be the mechanistic basis for homeostasis and for some apparent hormetic effects.
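The following toy simulation shows the kind of behavior described: when the clearance rate is up-regulated toward a concentration-dependent target with a lag, the time-integrated internal concentration is no longer proportional to the external concentration. The parameter values and the saturating induction function are assumptions for illustration, not taken from the paper.

    import numpy as np

    def auc_internal(c_ext, t_end=200.0, dt=0.01,
                     uptake=1.0, k0=0.1, k_max=1.0, c_half=5.0, tau=50.0):
        """Euler simulation of a one-compartment model whose clearance rate k
        is up-regulated toward a concentration-dependent target with lag tau.
        Returns the time-integrated internal concentration (AUC)."""
        c, k, auc = 0.0, k0, 0.0
        for _ in range(int(t_end / dt)):
            k_target = k0 + (k_max - k0) * c / (c_half + c)   # induction saturates
            dc = uptake * c_ext - k * c
            dk = (k_target - k) / tau                         # delayed regulation
            c += dc * dt
            k += dk * dt
            auc += c * dt
        return auc

    for c_ext in [0.1, 0.5, 1.0, 5.0, 10.0]:
        print(c_ext, round(auc_internal(c_ext), 2))   # AUC not proportional to c_ext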
206.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span the range of dose-response patterns observed in practice. We describe the construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose-response functions that are more stable and precisely estimated; these in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
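As one way to picture how such a database can feed a Bayesian analysis, the sketch below builds an empirical prior for the slope of a simple exponential quantal model from stand-in historical estimates (randomly generated here; the real prior would come from the 733 curated data sets) and combines it with the likelihood from a small hypothetical new study. Every number and name in the snippet is illustrative.

    import numpy as np
    from scipy.stats import norm

    # Stand-in for log-scale slope estimates harvested from many historical
    # quantal dose-response fits (placeholder for the curated database).
    rng = np.random.default_rng(1)
    hist_log_k = rng.normal(loc=-4.0, scale=1.5, size=733)
    prior_mu, prior_sd = hist_log_k.mean(), hist_log_k.std(ddof=1)

    # Small hypothetical new study: dose, group size, responders.
    dose, n, events = np.array([5.0, 50.0]), np.array([5, 5]), np.array([1, 3])

    # Grid posterior for log(k) under P(d) = 1 - exp(-k d) with the empirical prior.
    log_k = np.linspace(prior_mu - 6 * prior_sd, prior_mu + 6 * prior_sd, 4001)
    p = np.clip(1.0 - np.exp(-np.exp(log_k)[:, None] * dose), 1e-12, 1 - 1e-12)
    loglik = np.sum(events * np.log(p) + (n - events) * np.log(1.0 - p), axis=1)
    logpost = loglik + norm.logpdf(log_k, prior_mu, prior_sd)
    w = np.exp(logpost - logpost.max())
    print("posterior mean of log k:", np.sum(log_k * w) / np.sum(w))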
207.
In a clinical trial it is sometimes desirable to allocate as many patients as possible to the best treatment, in particular when a trial for a rare disease may contain a considerable portion of the whole target population. The Gittins index rule is a powerful tool for sequentially allocating patients to the best treatment based on the responses of patients already treated, but its application in clinical trials is limited by technical complexity and lack of randomness. Thompson sampling is an appealing alternative, since it strikes a compromise between optimal treatment allocation and randomness, with some desirable optimality properties in the machine learning context. In clinical trial settings, however, multiple simulation studies have shown disappointing results with Thompson samplers. We consider how to improve the short-run performance of Thompson sampling and propose a novel acceleration approach. This approach can also be applied when patients can only be allocated in batches, and it is easy to implement without complex algorithms. A simulation study showed that the approach can improve the performance of Thompson sampling in terms of average total response rate. An application to the redesign of a preference trial to maximize patients' satisfaction is also presented.
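To make the underlying mechanism concrete, here is a plain Beta-Bernoulli Thompson sampler for two treatments, written so that patients are allocated in batches (posterior draws are taken for a whole batch before the next update). It is a baseline sketch only: the acceleration proposed in the paper is not reproduced, and the response rates are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    p_true = [0.3, 0.5]                     # hypothetical response rates of two arms
    succ, fail = np.ones(2), np.ones(2)     # Beta(1, 1) priors on each arm

    responses = 0
    for batch in range(20):                 # allocate 200 patients in batches of 10
        draws = rng.beta(succ[:, None], fail[:, None], size=(2, 10))
        arms = draws.argmax(axis=0)         # one posterior draw per patient in the batch
        for arm in arms:
            y = rng.random() < p_true[arm]
            succ[arm] += y
            fail[arm] += 1 - y
            responses += y
    print("average response rate:", responses / 200)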
208.
Gupta et al. and Huang considered optional randomized response techniques in which the probability of choosing the randomized (or direct) response is fixed for all respondents. In this paper, the assumption of a constant probability of choosing the option is relaxed by dividing respondents into two groups: one group provides a direct response and the other a randomized response. Estimators of the population mean and variances under the modified assumption are obtained, and the relative efficiencies of the proposed techniques are compared both theoretically and empirically.
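One simple instance of the two-group idea (an illustrative special case, not necessarily the estimator derived in the paper): if group 1 of size $n_1$ answers directly and group 2 of size $n_2$ reports additively scrambled values $z = y + s$ with a known scrambling variable satisfying $E(s)=0$ and variance $\sigma_s^2$, both group means are unbiased for the population mean $\mu_Y$ and can be pooled with a weight chosen to minimize the variance:
$$\hat{\mu}_Y = w\,\bar{y}_1 + (1-w)\,\bar{z}_2, \qquad \operatorname{Var}(\hat{\mu}_Y) = w^2\,\frac{\sigma_Y^2}{n_1} + (1-w)^2\,\frac{\sigma_Y^2+\sigma_s^2}{n_2},$$
$$w^{*} = \frac{(\sigma_Y^2+\sigma_s^2)/n_2}{\sigma_Y^2/n_1 + (\sigma_Y^2+\sigma_s^2)/n_2},$$
assuming independent simple random samples in the two groups.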
209.
In any crisis there is a great deal of uncertainty, often geographical or, more precisely, spatiotemporal uncertainty. Examples include the spread of contamination from an industrial accident, drifting volcanic ash, and the path of a hurricane. Estimating spatiotemporal probabilities is usually a difficult task, but that is not our primary concern. Rather, we ask how analysts can communicate spatiotemporal uncertainty to those handling the crisis. We comment on the somewhat limited literature on the representation of spatial uncertainty on maps. We note that many cognitive issues arise and that the potential for confusion is high. We also note that in the early stages of handling a crisis the uncertainties involved may be deep, i.e., difficult or impossible to quantify in the time available. In such circumstances, we suggest presenting multiple scenarios.
210.
This study focuses on estimating the population mean of a sensitive variable in stratified random sampling based on a randomized response technique (RRT) when the observations are contaminated by measurement errors (ME). A generalized estimator of the population mean is proposed using additively scrambled responses for the sensitive variable. Expressions for the bias and mean square error (MSE) of the proposed estimator are derived, and its performance is evaluated both theoretically and empirically. The results are also applied to a real data set.
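For orientation, a minimal version of the additive-scrambling set-up with measurement error (an assumed special case, not the paper's generalized estimator) looks like this: in stratum $h$ the recorded response is $z_{hi} = y_{hi} + s_{hi} + e_{hi}$, where $s_{hi}$ is the scrambling variable and $e_{hi}$ the measurement error, both with mean zero and independent of $y_{hi}$. The usual stratified mean of the scrambled responses then remains unbiased, while both error sources inflate its variance:
$$\hat{\mu}_Y = \sum_h W_h\,\bar{z}_h, \qquad \operatorname{Var}(\hat{\mu}_Y) = \sum_h W_h^{2}\,\frac{\sigma_{Yh}^{2} + \sigma_{sh}^{2} + \sigma_{eh}^{2}}{n_h},$$
with stratum weights $W_h = N_h/N$, assuming simple random sampling with replacement within strata.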