Search results: 389 articles in total; items 41–50 are shown below.
41.
A statistical distribution of a random variable is uniquely represented by its normal-based quantile function. For a symmetric distribution this function is S-shaped (for negative kurtosis) or inverted-S-shaped (otherwise). As skewness departs from zero, the quantile function gradually transforms into a monotone convex function (for positive skewness) or a monotone concave function (otherwise). Recently, a new general modeling platform, response modeling methodology, has been introduced; it represents monotone convex relationships well owing to its unique "continuous monotone convexity" property. In this article, this property is exploited to model the normal-based quantile function, and the approach is explored using a set of 27 distributions.
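The shape taxonomy described above can be checked numerically. The sketch below (not the response modeling methodology itself, just an illustration of the claimed shapes) plots a distribution's quantiles against standard-normal quantiles and inspects the curvature: the exponential (positive skew) should be convex throughout, while the uniform (symmetric, negative excess kurtosis) should be S-shaped, i.e. convex then concave.

```python
import numpy as np
from scipy.stats import norm, expon, uniform

# Grid of probabilities and the corresponding standard-normal quantiles.
p = np.linspace(0.01, 0.99, 99)
z = norm.ppf(p)

# Normal-based quantile functions: distribution quantiles against z.
q_skewed = expon.ppf(p)    # positive skewness -> monotone convex
q_flat = uniform.ppf(p)    # symmetric, negative excess kurtosis -> S-shaped

def curvature(q, z):
    """Numerical second derivative of q with respect to z (nonuniform grid)."""
    return np.gradient(np.gradient(q, z), z)

c_skewed = curvature(q_skewed, z)
c_flat = curvature(q_flat, z)
# Exponential: convex everywhere; uniform: convex for z < 0, concave for z > 0.
```

The uniform case makes the "S" concrete: its quantile function against z is the normal CDF itself, whose second derivative changes sign at z = 0.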
42.
In split-plot experiments, the presence of whole-plot and subplot errors requires that unknown parameters be estimated by generalized least squares (GLS) rather than ordinary least squares (OLS). GLS, however, requires estimates of the error variances. Restricted maximum likelihood (REML) is an established method for estimating them, and its benefits have been highlighted in many previous studies. This article proposes a new two-step, residual-based approach for estimating the error variances. Numerical simulations indicate that the proposed method performs well enough to be considered a suitable alternative to REML.
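To see why GLS enters at all, the sketch below builds the compound-symmetric error covariance a split-plot design induces (a shared whole-plot error plus an independent subplot error) and compares GLS with OLS. The layout, coefficients, and variance components are hypothetical, and the variances are treated as known here; estimating them is precisely the problem the article's two-step method (or REML) addresses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical split-plot layout: 6 whole plots with 4 subplots each.
n_whole, n_sub = 6, 4
n = n_whole * n_sub
whole = np.repeat(np.arange(n_whole), n_sub)

# Design matrix: intercept plus one subplot-level covariate.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, 1.5])

# Error covariance: whole-plot variance shared within a whole plot,
# subplot variance on the diagonal (treated as known for this sketch).
s2_whole, s2_sub = 4.0, 1.0
V = s2_whole * (whole[:, None] == whole[None, :]) + s2_sub * np.eye(n)

y = X @ beta_true + rng.multivariate_normal(np.zeros(n), V)

# GLS: (X' V^-1 X)^-1 X' V^-1 y.  OLS simply ignores V.
Vinv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```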
43.
The purpose of this paper is to revisit the response surface technique of ridge analysis within the context of the "trust region" problem in numerical analysis. The two approaches turn out to solve inherently the same problem. We show that a computational difficulty termed the "hard case," which originates in trust region methods, also exists in ridge analysis but has never been formally discussed in response surface methodology (RSM). The dual response global optimization algorithm (DRSALG), based on the trust region method, is applied (with a certain modification) to the ridge analysis problem. Numerical comparisons against a general-purpose nonlinear optimization algorithm are illustrated using examples from the literature.
44.
Adaptive designs of clinical trials are ethical alternatives when traditional randomization becomes ethically infeasible in desperate medical situations. However, such designs create dependence among trial data, and their statistical analysis is more complex than that of traditional randomized clinical trials. In this article, we examine adaptive designs with dichotomous responses from two treatments and extend some commonly used statistical methods for independent data. Under a regularity condition, the estimated odds ratio and its logarithm are shown to be asymptotically normal. Moreover, the ordinary goodness-of-fit test statistic for two-by-two contingency tables is shown to remain asymptotically chi-square distributed despite the dependence in the data. We also discuss the consistency of maximum likelihood estimators of the unknown parameters for a wide class of adaptive designs.
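The Wald machinery the abstract extends is the textbook large-sample result for independent data: on a hypothetical 2x2 table, the log odds ratio is approximately normal with standard error sqrt(1/a + 1/b + 1/c + 1/d). The article's contribution is that this asymptotic behavior survives the dependence an adaptive allocation induces; the sketch below only shows the independent-data computation.

```python
import math

# Hypothetical 2x2 table: successes/failures under two treatments.
a, b = 20, 10   # treatment 1: successes, failures
c, d = 5, 15    # treatment 2: successes, failures

odds_ratio = (a * d) / (b * c)          # = 6.0 for this table
log_or = math.log(odds_ratio)

# Asymptotic (delta-method) standard error of log(OR).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)

# 95% Wald interval on the log scale, back-transformed.
lo, hi = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
```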
45.
Two kinds of sequential designs are proposed for finding the point that maximizes the probability of response, assuming a binary response variable and a quadratic logistic regression model. One is a parametric optimal-design approach; the other is a nonparametric stochastic-approximation approach. The suggested sequential designs are evaluated and compared in a simulation study. In summary, the parametric approach performed very well, whereas its competitor failed in some cases.
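The target both designs estimate has a closed form: for a quadratic logistic model p(x) = logistic(a + bx + cx^2) with c < 0, the response probability is maximized at x* = -b/(2c), since the logistic function is monotone in its argument. The sketch below (with hypothetical coefficients, not the paper's designs) verifies this against a grid search.

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical quadratic logistic model; c < 0 gives an interior maximum.
a, b, c = -1.0, 2.0, -0.5
x = np.linspace(-5, 10, 3001)
p = logistic(a + b * x + c * x**2)

x_star_closed = -b / (2 * c)      # analytic maximizer: 2.0
x_star_grid = x[np.argmax(p)]     # grid search over the response curve
```

The sequential designs in the abstract tackle the harder problem of locating x* from noisy binary observations rather than from the known curve.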
46.
Games can be a powerful tool for learning about statistical methodology. Effective game design involves a fine balance between caricature and realism, to simultaneously illustrate salient concepts in a controlled setting and serve as a testament to real-world applicability. Striking that balance is particularly challenging in response surface and design domains, where real-world scenarios often play out over long time scales, during which theories are revised, model and inferential techniques are improved, and knowledge is updated. Here, I present a game, borrowing liberally from one first played over 40 years ago, which attempts to achieve that balance while reinforcing a cascade of topics in modern nonparametric response surfaces, sequential design, and optimization. The game embeds a blackbox simulation within a shiny app whose interface is designed to simulate a realistic information-availability setting, while offering a stimulating, competitive environment wherein students can try out new methodology, and ultimately appreciate its power and limitations. Interface, rules, timing with course material, and evaluation are described, along with a "case study" involving a cohort of students at Virginia Tech. Supplementary materials for this article are available online.
47.
Conventional spirometry produces measurement error by using repeatability criteria (RC) to discard acceptable data and terminating tests early when RC are met. These practices also implicitly assume that there is no variation across maneuvers within each test. This has implications for air pollution regulations that rely on pulmonary function tests to determine adverse effects or set standards. We perform a Monte Carlo simulation of 20,902 tests of forced expiratory volume in 1 second (FEV1), each with eight maneuvers, for an individual with empirically obtained, plausibly normal pulmonary function. Default coefficients of variation for inter- and intratest variability (3% and 6%, respectively) are employed. Measurement error is defined as the difference between results from the conventional protocol and an unconstrained, eight-maneuver alternative. In the default model, average measurement error is shown to be ~5%. The minimum difference necessary for statistical significance at p < 0.05 for a before/after comparison is shown to be 16%. Meanwhile, the U.S. Environmental Protection Agency has deemed single-digit percentage decrements in FEV1 sufficient to justify more stringent national ambient air quality standards. Sensitivity analysis reveals that results are insensitive to intertest variability but highly sensitive to intratest variability. Halving the latter to 3% reduces measurement error by 55%. Increasing it to 9% or 12% increases measurement error by 65% or 125%, respectively. Within-day FEV1 differences ≤5% among normal subjects are believed to be clinically insignificant. Therefore, many differences reported as statistically significant are likely to be artifactual. Reliable data are needed to estimate intratest variability for the general population, subpopulations of interest, and research samples. Sensitive subpopulations (e.g., chronic obstructive pulmonary disease or COPD patients, asthmatics, children) are likely to have higher intratest variability, making it more difficult to derive valid statistical inferences about differences observed after treatment or exposure.
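A stripped-down version of this comparison can be simulated directly. The sketch below is not the paper's RC logic: it proxies the early-terminating protocol with "best of the first three maneuvers" versus the unconstrained "best of eight," using the abstract's default 6% intratest coefficient of variation and a hypothetical true FEV1 of 4.0 L, just to show how truncation alone biases the reported maximum downward.

```python
import numpy as np

rng = np.random.default_rng(7)

true_fev1 = 4.0       # hypothetical true value, litres
cv_intra = 0.06       # intratest coefficient of variation (abstract's default)
n_tests, n_maneuvers = 20_000, 8

# Each simulated test: eight maneuvers with multiplicative intratest noise.
maneuvers = true_fev1 * (1 + cv_intra * rng.standard_normal((n_tests, n_maneuvers)))

best_of_8 = maneuvers.max(axis=1)          # unconstrained alternative
best_of_3 = maneuvers[:, :3].max(axis=1)   # crude stand-in for early termination

# Mean shortfall of the truncated protocol, absolute and relative.
mean_error = (best_of_8 - best_of_3).mean()
rel_error = mean_error / true_fev1
```

Because the three maneuvers are a subset of the eight, the shortfall is nonnegative in every test, and its mean lands in the low single-digit percent range for this noise level.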
48.
Topics in Microbial Risk Assessment: Dynamic Flow Tree Process (cited by 5; self-citations 0; citations by others 5)
Microbial risk assessment is emerging as a new discipline in risk assessment. A systematic approach to microbial risk assessment is presented that employs data analysis for developing parsimonious models and accounts formally for the variability and uncertainty of model inputs using analysis of variance and Monte Carlo simulation. The purpose of the paper is to raise and examine issues in conducting microbial risk assessments. The enteric pathogen Escherichia coli O157:H7 was selected as an example for this study due to its significance to public health. The framework for our work is consistent with the risk assessment components described by the National Research Council in 1983 (hazard identification, exposure assessment, dose-response assessment, and risk characterization). Exposure assessment focuses on hamburgers cooked over a range of temperatures from rare to well done, the latter being typical of fast-food restaurants. Features of the model include predictive microbiology components that account for stochastic growth and death of organisms in hamburger. For dose-response modeling, Shigella data from human feeding studies were used as a surrogate for E. coli O157:H7. Risks were calculated using a threshold model and an alternative nonthreshold model. The 95% probability intervals for risk of illness for product cooked to a given internal temperature spanned five orders of magnitude for these models. The existence of even a small threshold has a dramatic impact on the estimated risk.
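The threshold-versus-nonthreshold contrast is easy to make concrete. The sketch below uses the standard exponential dose-response form for the nonthreshold case and bolts a hypothetical minimum infectious dose onto it for the threshold case; the parameter values are illustrative only, not those fitted in the paper. At low doses the two models diverge completely, which is the sensitivity the abstract highlights.

```python
import numpy as np

r = 0.01          # hypothetical per-organism infection probability
threshold = 10    # hypothetical minimum infectious dose (organisms)

def risk_nonthreshold(dose):
    # Exponential model: each ingested organism acts independently.
    return 1.0 - np.exp(-r * dose)

def risk_threshold(dose):
    # Same model, but zero risk below the threshold dose.
    return np.where(dose < threshold, 0.0, 1.0 - np.exp(-r * dose))

doses = np.array([1.0, 5.0, 50.0, 500.0])
p_non = risk_nonthreshold(doses)
p_thr = risk_threshold(doses)
# Below the threshold the models disagree qualitatively; above it they agree.
```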
49.
Evasion, as one kind of communicative strategy, is favored by guests on talk shows. This paper applies Verschueren's adaptation theory to examine the variability of evasion strategies in television interview programs. The study finds that the evasion strategies guests mainly employ reflect the variability dimension of adaptation theory through three features: changing the context of the question; deliberately shifting the topic of conversation; and deliberately violating Grice's maxim of manner. In actual interviews the three are not sharply separated but interwoven; their interweaving is seen most often in the combined use of the latter two features, and they are mutually cause and effect.
50.
The health‐related damages associated with emissions from coal‐fired power plants can vary greatly across facilities as a function of plant, site, and population characteristics, but the degree of variability and the contributing factors have not been formally evaluated. In this study, we modeled the monetized damages associated with 407 coal‐fired power plants in the United States, focusing on premature mortality from fine particulate matter (PM2.5). We applied a reduced‐form chemistry‐transport model accounting for primary PM2.5 emissions and the influence of sulfur dioxide (SO2) and nitrogen oxide (NOx) emissions on secondary particulate formation. Outputs were linked with a concentration‐response function for PM2.5‐related mortality that incorporated nonlinearities and model uncertainty. We valued mortality with a value of statistical life approach, characterizing and propagating uncertainties in all model elements. At the median of the plant‐specific uncertainty distributions, damages across plants ranged from $30,000 to $500,000 per ton of PM2.5, $6,000 to $50,000 per ton of SO2, $500 to $15,000 per ton of NOx, and $0.02 to $1.57 per kilowatt‐hour of electricity generated. Variability in damages per ton of emissions was almost entirely explained by population exposure per unit emissions (intake fraction), which itself was related to atmospheric conditions and the population size at various distances from the power plant. Variability in damages per kilowatt‐hour was highly correlated with SO2 emissions, related to fuel and control technology characteristics, but was also correlated with atmospheric conditions and population size at various distances. Our findings emphasize that control strategies that consider variability in damages across facilities would yield more efficient outcomes.
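The arithmetic linking per-ton and per-kilowatt-hour damages is simple once per-ton damages are in hand. The sketch below uses entirely hypothetical numbers (deaths per ton, VSL, emissions, and generation are illustrative, not the study's estimates) and covers only primary PM2.5, whereas the study also monetizes secondary particulate formation from SO2 and NOx.

```python
# All inputs hypothetical; shows how per-ton and per-kWh damages relate.
vsl = 9e6                       # value of statistical life, USD (hypothetical)
deaths_per_ton_pm25 = 0.02      # premature deaths per ton of primary PM2.5
damage_per_ton = deaths_per_ton_pm25 * vsl       # USD per ton emitted

pm25_tons = 600.0               # hypothetical annual primary PM2.5 emissions
generation_kwh = 5e9            # hypothetical annual electricity generation
damage_per_kwh = pm25_tons * damage_per_ton / generation_kwh
```

The deaths-per-ton factor is where intake fraction enters: it summarizes how much of each emitted gram the exposed population actually inhales, which is why it explains most of the across-plant variability in per-ton damages.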