The query returned 5,485 matching records; items 51-60 are listed below.
51.
Any continuous bivariate distribution can be expressed in terms of its margins and a unique copula. In the case of extreme-value distributions, the copula is characterized by a dependence function while each margin depends on three parameters. The authors propose a Bayesian approach for the simultaneous estimation of the dependence function and the parameters defining the margins. They describe a nonparametric model for the dependence function and a reversible jump Markov chain Monte Carlo algorithm for the computation of the Bayesian estimator. They show through simulations that their estimator has a smaller mean integrated squared error than classical nonparametric estimators, especially in small samples. They illustrate their approach on a hydrological data set.
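The abstract refers to a "dependence function" and three-parameter margins without writing them out; the standard Pickands representation below makes both concrete. This is the usual textbook form and notation, not necessarily the exact parameterization the authors adopt:

```latex
% Bivariate extreme-value copula via the Pickands dependence function A,
% and a three-parameter GEV margin (location mu, scale sigma, shape xi).
\[
  C(u,v) \;=\; \exp\!\left\{ \log(uv)\, A\!\left( \frac{\log v}{\log(uv)} \right) \right\},
  \qquad 0 < u, v < 1,
\]
\[
  \max(t,\, 1-t) \;\le\; A(t) \;\le\; 1, \qquad A \text{ convex on } [0,1],\ A(0)=A(1)=1,
\]
\[
  G(x) \;=\; \exp\!\left\{ -\left[\, 1 + \xi\,\frac{x-\mu}{\sigma} \,\right]_{+}^{-1/\xi} \right\}.
\]
```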
52.
A model is introduced here for multivariate failure time data arising from heterogeneous populations. In particular, we consider a situation in which the failure times of individual subjects are often temporally clustered, so that many failures occur during a relatively short age interval. The clustering is modelled by assuming that the subjects can be divided into ‘internally homogeneous’ latent classes, each such class being then described by a time-dependent frailty profile function. As an example, we reanalysed the dental caries data presented earlier in Härkänen et al. [Scand. J. Statist. 27 (2000) 577], as it turned out that our earlier model could not adequately describe the observed clustering.
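For readers unfamiliar with frailty profiles, the sketch below simulates failure times from a toy version of such a mechanism: subjects belong to latent classes, each class carries a time-dependent multiplier on a common baseline hazard, and temporal clustering appears when one class has a burst of elevated risk. The piecewise-constant profiles, constant baseline, and thinning scheme are illustrative assumptions, not the model fitted in the paper.

```python
# Toy latent-class frailty mechanism: hazard_i(t) = z_{class(i)}(t) * baseline,
# with event times drawn by thinning a dominating Poisson process.
import numpy as np

rng = np.random.default_rng(1)
BASELINE = 0.05                        # constant baseline hazard (per year), illustrative

def frailty(cls, t):
    """Time-dependent frailty profile: class 0 has a burst of risk between ages 5 and 8."""
    if cls == 0:
        return 8.0 if 5.0 <= t < 8.0 else 0.5
    return 1.0

def simulate_event_time(cls, t_max=20.0, lam_max=1.0):
    """Draw one event time by thinning; lam_max bounds the true hazard from above."""
    t = 0.0
    while t < t_max:
        t += rng.exponential(1.0 / lam_max)
        if t >= t_max:
            break
        if rng.uniform() < BASELINE * frailty(cls, t) / lam_max:
            return t
    return np.inf                      # censored at t_max

classes = rng.choice([0, 1], size=500, p=[0.3, 0.7])
times = np.array([simulate_event_time(c) for c in classes])
print("median event time, class 0:", np.median(times[classes == 0]))
print("median event time, class 1:", np.median(times[classes == 1]))
```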
53.
Recently, we developed a GIS-Integrated Integral Risk Index (IRI) to assess human health risks in areas with presence of environmental pollutants. Contaminants were previously ranked by applying a self-organizing map (SOM) to their characteristics of persistence, bioaccumulation, and toxicity in order to obtain the Hazard Index (HI). In the present study, the original IRI was substantially improved by allowing the entrance of probabilistic data. A neuroprobabilistic HI was developed by combining SOM and Monte Carlo analysis. In general terms, the deterministic and probabilistic HIs followed a similar pattern: polychlorinated biphenyls (PCBs) and light polycyclic aromatic hydrocarbons (PAHs) were the pollutants showing the highest and lowest values of HI, respectively. However, the bioaccumulation value of heavy metals notably increased after a probability density function was used to describe the bioaccumulation factor. To check its applicability, a case study was investigated. The probabilistic integral risk was calculated in the chemical/petrochemical industrial area of Tarragona (Catalonia, Spain), where an environmental program has been carried out since 2002. The risk change between 2002 and 2005 was evaluated on the basis of probabilistic data on the levels of various pollutants in soils. The results indicated that the risk of the chemicals under study did not follow a homogeneous trend. However, the current levels of pollution do not constitute a relevant source of health risks for the local population. Moreover, the neuroprobabilistic HI seems to be an adequate tool to be taken into account in risk assessment processes.
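The SOM step is beyond a short example, but the Monte Carlo part — replacing a point-valued bioaccumulation factor with a probability distribution and propagating it into a weighted hazard score — can be sketched. All scores, weights, and distribution parameters below are hypothetical placeholders, not values from the Tarragona study:

```python
# Toy probabilistic hazard index: persistence, bioaccumulation and toxicity
# scores (all on a 0-1 scale) are combined into a weighted sum, with the
# bioaccumulation score drawn from a Beta distribution instead of being a
# point value. Every number here is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

pollutants = {
    #              persistence  toxicity   Beta(a, b) for bioaccumulation
    "PCBs":          (0.9,        0.8,       (8.0, 2.0)),
    "heavy metals":  (0.7,        0.6,       (4.0, 3.0)),
    "light PAHs":    (0.3,        0.4,       (2.0, 8.0)),
}
w_pers, w_bio, w_tox = 1 / 3, 1 / 3, 1 / 3      # equal weights, purely illustrative

for name, (pers, tox, (a, b)) in pollutants.items():
    bio = rng.beta(a, b, size=N)                 # probabilistic bioaccumulation score
    hi = w_pers * pers + w_bio * bio + w_tox * tox
    lo5, med, hi95 = np.percentile(hi, [5, 50, 95])
    print(f"{name:13s} HI median {med:.2f}  (5th-95th pct {lo5:.2f}-{hi95:.2f})")
```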
54.
Supplier selection in the enterprise based on TOPSIS theory (cited 2 times: 0 self-citations, 2 by others)
Supply chain management has become a means for enterprises to gain competitive advantage, and supplier selection is an important part of building a supply chain system. TOPSIS is a simple and practical method for multi-criteria selection; it uses entropy to determine the weights of the evaluation indicators, avoiding the influence of subjective judgement in weight setting, and thus provides a scientific, quantitative, and efficient tool for supplier selection. The article reviews the state of supplier selection theory, introduces the TOPSIS model and the steps for applying it to supplier selection, and, drawing on an in-depth investigation of a representative enterprise, describes how that enterprise used TOPSIS to select its suppliers.
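The abstract names the two computational ingredients — entropy weighting and TOPSIS ranking — without giving the steps. A minimal, generic implementation is sketched below; the decision matrix and criteria are hypothetical, all criteria are treated as benefit criteria, and this is not the evaluation index system used by the surveyed enterprise:

```python
# Generic entropy-weighted TOPSIS:
# 1. derive criterion weights from the entropy of each column,
# 2. rank alternatives by relative closeness to the ideal solution.
import numpy as np

def entropy_weights(X):
    P = X / X.sum(axis=0)                              # column-wise proportions
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(P * np.log(P), axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - e                                        # degree of diversification
    return d / d.sum()

def topsis(X, w):
    R = X / np.sqrt((X ** 2).sum(axis=0))              # vector normalization
    V = R * w                                          # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)         # benefit criteria only
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)                # relative closeness

# Hypothetical suppliers scored on quality, delivery, price competitiveness, service.
X = np.array([[0.85, 0.90, 0.70, 0.80],
              [0.80, 0.75, 0.90, 0.85],
              [0.90, 0.60, 0.80, 0.70],
              [0.70, 0.85, 0.75, 0.95]])
w = entropy_weights(X)
score = topsis(X, w)
print("entropy weights: ", np.round(w, 3))
print("closeness scores:", np.round(score, 3))
print("supplier ranking (best first):", np.argsort(-score) + 1)
```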
55.
The enterprise value-added chain is a condensed expression of corporate culture and the lifeline of enterprise development. Whether that chain is clear and complete is the key to analysing how an enterprise survives and develops, and it is also the basic yardstick for examining and evaluating the enterprise's current state.
56.
The emergence of Deng Xiaoping Theory is not marked by any single date, work, or viewpoint; its beginnings call for a systematic and comprehensive scientific definition. The markers of that emergence should instead be understood as a span of time, a chain of works, and a network of viewpoints.
57.
A Markov property associates a set of conditional independencies to a graph. Two alternative Markov properties are available for chain graphs (CGs), the Lauritzen–Wermuth–Frydenberg (LWF) and the Andersson–Madigan–Perlman (AMP) Markov properties, which are different in general but coincide for the subclass of CGs with no flags. Markov equivalence induces a partition of the class of CGs into equivalence classes, and every equivalence class contains a, possibly empty, subclass of CGs with no flags, itself containing a, possibly empty, subclass of directed acyclic graphs (DAGs). LWF-Markov equivalence classes of CGs can be naturally characterized by means of the so-called largest CGs, whereas a graphical characterization of equivalence classes of DAGs is provided by the essential graphs. In this paper, we show the existence of largest CGs with no flags that provide a natural characterization of equivalence classes of CGs of this kind, with respect to both the LWF- and the AMP-Markov properties. We propose a procedure for the construction of the largest CGs, the largest CGs with no flags and the essential graphs, thereby providing a unified approach to the problem. As by-products we obtain a characterization of graphs that are largest CGs with no flags and an alternative characterization of graphs that are largest CGs. Furthermore, a known characterization of the essential graphs is shown to be a special case of our more general framework. The three graphical characterizations have a common structure: they use two versions of a locally verifiable graphical rule. Moreover, in the case of DAGs, an immediate comparison of the three characterizing graphs is possible.
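Since the result hinges on the subclass of CGs with no flags, a small check for flags may help fix ideas. The sketch assumes the standard definition — an induced subgraph a → b – c with a and c non-adjacent — and a naive edge-list representation; it is not the construction procedure proposed in the paper:

```python
def has_flag(directed_edges, undirected_edges):
    """Return True if the chain graph contains a flag: an induced subgraph
    a -> b -- c with a and c non-adjacent (standard definition assumed)."""
    directed_edges = list(directed_edges)
    undirected_edges = list(undirected_edges)
    adjacent = {frozenset(e) for e in directed_edges + undirected_edges}
    for a, b in directed_edges:                 # a -> b
        for u, v in undirected_edges:           # find an undirected neighbour c of b
            if b == u:
                c = v
            elif b == v:
                c = u
            else:
                continue
            if c != a and frozenset((a, c)) not in adjacent:
                return True
    return False

# a -> b, b -- c, with a and c non-adjacent: a flag is present.
print(has_flag([("a", "b")], [("b", "c")]))                 # True
# Adding the edge a -- c makes a and c adjacent, so the flag disappears.
print(has_flag([("a", "b")], [("b", "c"), ("a", "c")]))     # False
```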
58.
This paper develops a likelihood-based method for fitting additive models in the presence of measurement error. It formulates the additive model using the linear mixed model representation of penalized splines. In the presence of a structural measurement error model, the resulting likelihood involves intractable integrals, and a Monte Carlo expectation maximization strategy is developed for obtaining estimates. The method's performance is illustrated with a simulation study.
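A common concrete form of this setup, assuming a truncated-line spline basis and classical normal measurement error (choices the paper need not share), is:

```latex
% Penalized spline as a linear mixed model (truncated-line basis with knots
% kappa_1 < ... < kappa_K), plus a classical additive measurement error model.
% The basis and normality assumptions are illustrative, not necessarily those
% used in the paper.
\[
  y_i = \beta_0 + \beta_1 x_i + \sum_{k=1}^{K} u_k (x_i - \kappa_k)_+ + \varepsilon_i,
  \qquad u_k \sim N(0, \sigma_u^2), \quad \varepsilon_i \sim N(0, \sigma_\varepsilon^2),
\]
\[
  w_i = x_i + \delta_i, \qquad \delta_i \sim N(0, \sigma_\delta^2), \qquad
  x_i \sim N(\mu_x, \sigma_x^2) \ \text{(unobserved)},
\]
% The observed-data likelihood requires integrating over the unobserved x_i,
% which is what motivates a Monte Carlo E-step.
```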
59.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Treating all scalars as imprecise numbers (intervals prescribed by significant digits) added about one order of magnitude of uncertainty to the deterministic point estimate. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the size of the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
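To illustrate the point about significant digits, the toy calculation below treats each scalar in a generic dose equation as the interval implied by its reported precision and propagates those intervals; the equation and the numbers are hypothetical and unrelated to the assessment in the paper:

```python
# Toy illustration of how rounding to significant digits widens an estimate
# when scalars are treated as intervals. The dose equation (concentration *
# intake rate * exposure fraction / body weight) is a generic textbook form.
import math

def interval_from_sigfigs(x, sigfigs):
    """Interval of values that round to x at the given number of significant digits."""
    if x == 0:
        return (0.0, 0.0)
    exponent = math.floor(math.log10(abs(x))) - (sigfigs - 1)
    half = 0.5 * 10.0 ** exponent
    return (x - half, x + half)

def interval_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def interval_div(a, b):            # assumes the interval b does not contain zero
    return interval_mul(a, (1.0 / b[1], 1.0 / b[0]))

# Hypothetical point inputs, each reported to only a few significant digits.
conc = 2.0     # contaminant concentration in soil, mg/kg (2 sig. digits)
ir   = 0.0001  # soil ingestion rate, kg/day (1 sig. digit)
ef   = 0.5     # fraction of time exposed (1 sig. digit)
bw   = 70.0    # body weight, kg (2 sig. digits)

point = conc * ir * ef / bw
dose = interval_div(
    interval_mul(interval_mul(interval_from_sigfigs(conc, 2),
                              interval_from_sigfigs(ir, 1)),
                 interval_from_sigfigs(ef, 1)),
    interval_from_sigfigs(bw, 2),
)
print(f"point estimate:              {point:.3e} mg/kg-day")
print(f"interval from rounding only: [{dose[0]:.3e}, {dose[1]:.3e}] mg/kg-day")
```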
60.
Some statistical models defined in terms of a generating stochastic mechanism have intractable distribution theory, which renders parameter estimation difficult. However, a Monte Carlo estimate of the log-likelihood surface for such a model can be obtained via computation of nonparametric density estimates from simulated realizations of the model. Unfortunately, the bias inherent in density estimation can cause bias in the resulting log-likelihood estimate that alters the location of its maximizer. In this paper a methodology for radically reducing this bias is developed for models with an additive error component. An illustrative example involving a stochastic model of molecular fragmentation and measurement is given.
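The basic simulated-likelihood idea can be sketched in a few lines: for each candidate parameter value, simulate realizations of the model, fit a kernel density estimate, and evaluate its log density at the observed data. The stand-in model below (an exponential signal observed with additive Gaussian error) and the grid search are illustrative; the paper's fragmentation model and its bias-reduction step are not reproduced:

```python
# Monte Carlo log-likelihood via kernel density estimation of simulated data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def simulate(theta, n):
    """Stand-in generative model: exponential signal plus additive N(0, 1) noise."""
    return rng.exponential(scale=theta, size=n) + rng.normal(0.0, 1.0, size=n)

def mc_loglik(theta, observed, n_sim=5000):
    """KDE of simulated realizations, evaluated (in log) at the observed data."""
    kde = gaussian_kde(simulate(theta, n_sim))
    return np.sum(np.log(kde(observed)))

observed = simulate(2.5, n=200)                 # pretend these are the data
grid = np.linspace(1.0, 5.0, 17)
loglik = [mc_loglik(t, observed) for t in grid]
print("approximate maximiser:", grid[int(np.argmax(loglik))])
```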