  Subscription full text   926 articles
  Free   27 articles
  Domestic free   4 articles
Management   71 articles
Ethnology   1 article
Talent studies   1 article
Demography   2 articles
Collected works   7 articles
Theory and methodology   11 articles
General   157 articles
Sociology   19 articles
Statistics   688 articles
By year: 2024 (1), 2023 (5), 2022 (10), 2021 (7), 2020 (22), 2019 (49), 2018 (32), 2017 (60), 2016 (28), 2015 (30), 2014 (35), 2013 (211), 2012 (120), 2011 (30), 2010 (23), 2009 (27), 2008 (34), 2007 (24), 2006 (12), 2005 (28), 2004 (23), 2003 (18), 2002 (19), 2001 (21), 2000 (13), 1999 (10), 1998 (13), 1997 (10), 1996 (7), 1995 (8), 1994 (8), 1993 (3), 1992 (4), 1991 (2), 1990 (3), 1989 (2), 1988 (2), 1986 (1), 1985 (1), 1984 (1).
A total of 957 query results.
1.
For multi-attribute decision-making problems in which the decision information consists of Pythagorean fuzzy sets, a TOPSIS method based on a hybrid weighted measure is proposed. After analysing the shortcomings of existing distance measures, we first define a new Pythagorean fuzzy distance measure, the Pythagorean fuzzy ordered weighted distance (PFOWD), and study how its weights are determined. Building on the PFOWD, we then propose the Pythagorean fuzzy hybrid weighted distance (PFHWD) and discuss its properties and its relationship to existing Pythagorean fuzzy measures. Finally, a Pythagorean fuzzy TOPSIS multi-attribute method based on the PFHWD measure is presented, and a numerical example verifies its effectiveness and feasibility.
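The abstract does not reproduce the PFOWD/PFHWD formulas, so the following is only a rough sketch of the general idea: a commonly used Pythagorean fuzzy distance, an OWA-style ordered weighted aggregation, and a TOPSIS closeness index. The specific distance formula and function names are assumptions, not the authors' definitions.

```python
def pfn_distance(a, b):
    # a, b: Pythagorean fuzzy numbers (mu, nu) with mu**2 + nu**2 <= 1.
    # A common squared-membership distance; the hesitancy term is
    # pi**2 = 1 - mu**2 - nu**2.
    pa = 1.0 - a[0] ** 2 - a[1] ** 2
    pb = 1.0 - b[0] ** 2 - b[1] ** 2
    return 0.5 * (abs(a[0] ** 2 - b[0] ** 2)
                  + abs(a[1] ** 2 - b[1] ** 2)
                  + abs(pa - pb))

def ordered_weighted_distance(xs, ys, weights):
    # OWA-style aggregation: attribute-wise distances are sorted in
    # descending order before the weights are applied.
    ds = sorted((pfn_distance(x, y) for x, y in zip(xs, ys)), reverse=True)
    return sum(w * d for w, d in zip(weights, ds))

def topsis_closeness(alt, pos_ideal, neg_ideal, weights):
    # Relative closeness of an alternative to the positive ideal solution.
    d_pos = ordered_weighted_distance(alt, pos_ideal, weights)
    d_neg = ordered_weighted_distance(alt, neg_ideal, weights)
    return d_neg / (d_pos + d_neg)
```

Alternatives are then ranked by descending closeness, as in ordinary TOPSIS.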
2.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is obtained via the missing-information principle and used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. We consider three optimality criteria to find the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
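Of the Bayesian tools the abstract lists, the Metropolis-Hastings algorithm is the most generic. A minimal random-walk sampler for a one-dimensional log-posterior, purely as an illustration and not the paper's GHN-specific implementation, can be sketched as:

```python
import math
import random

def metropolis_hastings(log_post, init, step, n_iter, seed=0):
    # Random-walk Metropolis sampler: propose x' = x + N(0, step^2),
    # accept with probability min(1, exp(log_post(x') - log_post(x))).
    rng = random.Random(seed)
    x = init
    lp = log_post(x)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

In practice one would discard a burn-in segment and tune `step` for a reasonable acceptance rate before summarizing the draws as posterior estimates.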
3.
Abstract.  Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have not been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for approximating confidence intervals in the estimation of the occurrence rate function. It is shown that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
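A minimal sketch of a moment-type estimator of the kind described above: kernel-smooth the pooled event times and divide by the number of subjects still under observation at time t. The Gaussian kernel and the data layout are illustrative assumptions, not the paper's exact estimator.

```python
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def rate_estimate(t, event_times, censor_times, h):
    # event_times: list of per-subject lists of recurrent event times.
    # censor_times: per-subject end-of-follow-up (censoring) times.
    # h: bandwidth.  Returns the smoothed occurrence rate at t.
    at_risk = sum(1 for c in censor_times if c >= t)
    if at_risk == 0:
        return 0.0
    smoothed = sum(gaussian_kernel((t - s) / h) / h
                   for times in event_times for s in times)
    return smoothed / at_risk
```

The nicks mentioned in the abstract arise because `at_risk` drops by one at each censoring time while the smoothed numerator stays continuous; resmoothing or the least squares approach removes them.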
4.
It is well known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed-form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and the Cramér-Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
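Since the bounds above serve as starting points for regula falsi, a generic false-position root finder (not tied to the exponential likelihood equation, which the abstract does not state) looks like:

```python
def regula_falsi(f, lo, hi, tol=1e-10, max_iter=200):
    # Classic false-position method on [lo, hi]; requires a sign change,
    # i.e. f(lo) and f(hi) bracket a root.
    f_lo, f_hi = f(lo), f(hi)
    if f_lo * f_hi > 0:
        raise ValueError("root not bracketed")
    x = lo
    for _ in range(max_iter):
        # Secant intersection with the axis replaces the bisection midpoint.
        x = hi - f_hi * (hi - lo) / (f_hi - f_lo)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if f_lo * fx < 0:
            hi, f_hi = x, fx
        else:
            lo, f_lo = x, fx
    return x
```

Applied to the score equation for δ, the paper's sharp lower and upper bounds would play the roles of `lo` and `hi`.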
5.
Oller, Gómez & Calle (2004) give a constant-sum condition for processes that generate interval-censored lifetime data. They show that in models satisfying this condition it is possible to estimate the lifetime distribution non-parametrically from a well-known simplified likelihood. The author shows that this constant-sum condition is equivalent to the existence of an observation process that is independent of the lifetimes and gives the same probability distribution for the observed data as the underlying true process.
6.
Asthma patients' health status may be especially sensitive to some types of air pollution, but the evidence on this is mixed. We explore the effects of ground-level ozone on asthma patients' activities, breaking the usual aggregated leisure category into indoor and outdoor activities and differentiating these by whether the activities were active or inactive. Applying a semiparametric censored estimation method, we demonstrate that, even though ozone levels were relatively low over the period in which activities were observed, ozone has a significant impact on a few activities. The (non-ozone) economic and demographic variables in the model play significant roles in explaining the allocation of time among seven activities, suggesting the suitability of the approach for other household decision-making contexts.
7.
Abstract A model is introduced here for multivariate failure time data arising from heterogeneous populations. In particular, we consider a situation in which the failure times of individual subjects are often temporally clustered, so that many failures occur during a relatively short age interval. The clustering is modelled by assuming that the subjects can be divided into 'internally homogeneous' latent classes, each such class then being described by a time-dependent frailty profile function. As an example, we reanalysed the dental caries data presented earlier in Härkänen et al. [Scand. J. Statist. 27 (2000) 577], as it turned out that our earlier model could not adequately describe the observed clustering.
8.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated measures outcome, such as CD4 count.
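The weighted log-rank machinery these tests build on can be sketched as follows: at each failure time, compare the observed number of failures in one arm with its expectation under the null. This is the standard two-arm statistic, not the authors' informative-censoring-adjusted test; the data layout is an assumption for illustration.

```python
def logrank_statistic(times, events, groups, weight=lambda t: 1.0):
    # times: follow-up times; events: 1 = failure, 0 = censored;
    # groups: 0/1 treatment-arm indicator.  Returns (U, V): the weighted
    # score statistic and its variance; U / sqrt(V) is asymptotically
    # standard normal under the null.
    U = V = 0.0
    fail_times = sorted({t for t, d in zip(times, events) if d == 1})
    for t in fail_times:
        n = sum(1 for ti in times if ti >= t)               # at risk, overall
        n1 = sum(1 for ti, g in zip(times, groups) if ti >= t and g == 1)
        d = sum(1 for ti, di in zip(times, events) if ti == t and di == 1)
        d1 = sum(1 for ti, di, g in zip(times, events, groups)
                 if ti == t and di == 1 and g == 1)
        w = weight(t)
        U += w * (d1 - d * n1 / n)                          # observed - expected
        if n > 1:
            V += w * w * d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
    return U, V
```

Different choices of `weight` recover the familiar weighted log-rank family; the paper's contribution is making such tests valid when censoring is informative, which this plain version does not address.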
9.
晏妮娜, 黄小原, 朱宏. 《管理学报》 2006, 3(5): 524-528
In an electronic-market environment, accounting for randomness in demand, market price, and degree of market access, a stochastic expected-value model of supply-chain option-contract coordination is built on a Stackelberg leader-follower game. In this model, the leader, the supplier, maximizes expected profit with the option reservation fee and exercise fee as decision variables; the follower, the distributor, maximizes expected profit with the order quantity as decision variable. A hybrid intelligent algorithm combining stochastic simulation, artificial neural networks, and a genetic algorithm is applied to solve the leader-follower problem. Finally, drawing on the e-commerce operations of Shanghai Baosteel Group's Yichang company, simulation computations and analysis are carried out with the hybrid intelligent algorithm.
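To give a feel for the follower's subproblem, here is a toy version: a genetic algorithm searching for the distributor's order quantity that maximizes a Monte Carlo estimate of expected profit under an option contract. Every number (prices, fees, the uniform demand) is hypothetical, the neural-network component is omitted, and nothing here reflects the Baosteel case data.

```python
import random

def expected_profit(q, price=10.0, option_fee=1.0, exercise_fee=6.0,
                    n_sims=1000, seed=42):
    # Monte Carlo estimate of the distributor's expected profit when q
    # option units are reserved and demand is uniform(50, 150).
    # Fixed seed gives common random numbers across candidate q values.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        demand = rng.uniform(50.0, 150.0)
        exercised = min(q, demand)
        total += price * exercised - option_fee * q - exercise_fee * exercised
    return total / n_sims

def genetic_search(fitness, lo, hi, pop_size=24, generations=30, seed=1):
    # Toy real-coded GA: truncation selection, midpoint crossover,
    # Gaussian mutation, with the parent half carried over (elitism).
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.02 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)
```

With these toy numbers the marginal profit per exercised unit is 4 against a reservation fee of 1, so the optimal q sits near the 75th percentile of demand, around 125; the GA should land close to that.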
10.
This paper considers the design of accelerated life test (ALT) sampling plans under Type I progressive interval censoring with random removals. We assume that the lifetime of products follows a Weibull distribution. Two constant stress levels higher than the use condition are employed. The sample size and the acceptability constant that satisfy given levels of producer's risk and consumer's risk are found. In particular, the optimal stress level and the allocation proportion are obtained by minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters. Furthermore, for validation purposes, a Monte Carlo simulation is conducted to assess the true probability of acceptance for the derived sampling plans.
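The validation step, assessing a plan's true probability of acceptance by simulation, can be illustrated with a much simpler uncensored variables-type plan: draw n Weibull lifetimes and accept the lot when the log-lifetime sample mean, penalized by k standard deviations, clears a lower specification. The acceptance rule and all parameter values below are illustrative assumptions, not the paper's censored-ALT plan.

```python
import math
import random

def acceptance_probability(n, k, shape, scale, t0, n_sims=5000, seed=0):
    # Monte Carlo estimate of P(accept) for a sketch plan (n, k):
    # accept when mean(log T) - k * sd(log T) >= log(t0), with
    # T ~ Weibull(shape, scale) simulated by inversion.
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n_sims):
        logs = []
        for _ in range(n):
            u = max(rng.random(), 1e-300)        # guard against log(0)
            # log T = log(scale) + log(-log U) / shape, U ~ uniform(0, 1)
            logs.append(math.log(scale) + math.log(-math.log(u)) / shape)
        m = sum(logs) / n
        s = (sum((x - m) ** 2 for x in logs) / (n - 1)) ** 0.5
        if m - k * s >= math.log(t0):
            accepted += 1
    return accepted / n_sims
```

Sweeping the true `scale` (or `t0`) traces out the plan's operating-characteristic curve, which is exactly what the paper's Monte Carlo validation checks against the nominal producer's and consumer's risks.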
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号