9,736 results found (search time: 15 ms)
81.
Summary: We propose a simple estimation procedure for a proportional hazards frailty regression model for clustered survival data in which the dependence is generated by a positive stable distribution. Inferences for the frailty parameter can be obtained from the output of Cox regression analyses. The computational burden is substantially less than that of other approaches to estimation. The large-sample behaviour of the estimator is studied, and simulations show that the approximations are appropriate for realistic sample sizes. The methods are motivated by studies of familial associations in the natural history of diseases. Their practical utility is illustrated with sib-pair data from Beaver Dam, Wisconsin.
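As context for the frailty mechanism (this is not the authors' estimation procedure), one-sided positive stable variates are easy to simulate via Kanter's representation, and the simulation can be checked against the known Laplace transform. A minimal numpy sketch, with the stable index alpha = 0.6 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def positive_stable(alpha, size, rng):
    """Kanter's representation of a one-sided alpha-stable variate W
    with Laplace transform E[exp(-s*W)] = exp(-s**alpha), 0 < alpha < 1."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

alpha = 0.6                    # stable index: smaller alpha means stronger dependence
w = positive_stable(alpha, 200_000, rng)
for s in (0.5, 1.0, 2.0):      # empirical vs. theoretical Laplace transform
    print(s, np.mean(np.exp(-s * w)), np.exp(-s ** alpha))
```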
82.
Sampling designs that depend on sample moments of auxiliary variables are well known. Lahiri (Bull Int Stat Inst 33:133–140, 1951) considered a sampling design proportionate to the sample mean of an auxiliary variable. Singh and Srivastava (Biometrika 67(1):205–209, 1980) proposed a sampling design proportionate to the sample variance, while Wywiał (J Indian Stat Assoc 37:73–87, 1999) proposed one proportionate to the sample generalized variance of auxiliary variables. Other sampling designs dependent on moments of an auxiliary variable were considered, e.g., in Wywiał (Some contributions to multivariate methods in survey sampling. Katowice University of Economics, Katowice, 2003a; Stat Transit 4(5):779–798, 2000), where the accuracy of several sampling strategies was also compared. These sampling designs are not useful when some observations of the auxiliary variable are censored; moreover, they can be much too sensitive to outlying observations. In such cases a sampling design proportionate to an order statistic of the auxiliary variable can be more useful, and that is the unequal-probability sampling design proposed here. Its particular cases, as well as its conditional version, are also considered. A sampling scheme implementing the design is proposed, and the first- and second-order inclusion probabilities are evaluated. The well-known Horvitz–Thompson estimator is taken into account, and a ratio estimator dependent on an order statistic is constructed. It is similar to the well-known ratio estimator based on the population and sample means; moreover, it is an unbiased estimator of the population mean when the sample is drawn according to the proposed sampling design dependent on the appropriate order statistic.
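The Horvitz–Thompson estimator mentioned above works for any design with known first-order inclusion probabilities. As a reminder of the mechanics only (Poisson sampling is used here as a stand-in, not the paper's order-statistic design; all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000
x = rng.gamma(2.0, 2.0, N)                  # auxiliary variable
y = 3.0 * x + rng.normal(0.0, 1.0, N)       # study variable
pi = np.clip(50 * x / x.sum(), 0.0, 1.0)    # first-order inclusion probabilities

estimates = []
for _ in range(5_000):
    s = rng.uniform(size=N) < pi            # Poisson sampling: independent inclusions
    estimates.append(np.sum(y[s] / pi[s]))  # Horvitz-Thompson estimate of the total

print("true total:", y.sum(), " mean HT estimate:", np.mean(estimates))
```

Averaged over repeated draws, the Horvitz–Thompson estimate matches the true population total, illustrating its design-unbiasedness.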
83.
If the unknown mean of a univariate population is sufficiently close to the value of an initial guess, then an appropriate shrinkage estimator has smaller average squared error than the sample mean. This principle has been known for some time, but it does not appear to have been extended to problems of interval estimation. The author presents valid two-sided 95% and 99% "shrinkage" confidence intervals for the mean of a normal distribution. These intervals are narrower than the usual interval based on the Student t distribution when the population mean lies in an "effective interval" around the initial guess. A reduction of 20% in the mean width of the interval is possible when the population mean is sufficiently close to the value of the guess. The author also describes a modification to existing shrinkage point estimators of the general univariate mean that enables the effective interval to be enlarged.
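The opening principle is easy to verify numerically with a linear shrinkage point estimator (this sketch illustrates the squared-error principle only, not the paper's interval construction; the weight c = 0.7 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 25, 100_000
guess, c = 0.0, 0.7                   # initial guess and shrinkage weight (illustrative)

for mu in (0.0, 0.1, 0.5, 2.0):       # true mean at increasing distance from the guess
    xbar = rng.normal(mu, 1.0 / np.sqrt(n), reps)  # sampling distribution of the mean
    shrunk = guess + c * (xbar - guess)            # pull the sample mean toward the guess
    print(mu, np.mean((xbar - mu) ** 2), np.mean((shrunk - mu) ** 2))
```

The shrinkage estimator wins when mu is near the guess and loses once mu moves far enough away, which is exactly the "effective interval" phenomenon described above.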
84.
Any continuous bivariate distribution can be expressed in terms of its margins and a unique copula. In the case of extreme-value distributions, the copula is characterized by a dependence function while each margin depends on three parameters. The authors propose a Bayesian approach for the simultaneous estimation of the dependence function and the parameters defining the margins. They describe a nonparametric model for the dependence function and a reversible jump Markov chain Monte Carlo algorithm for the computation of the Bayesian estimator. They show through simulations that their estimator has a smaller mean integrated squared error than classical nonparametric estimators, especially in small samples. They illustrate their approach on a hydrological data set.
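For reference, the dependence function alluded to here is the Pickands function of a bivariate extreme-value copula; under one common convention,

```latex
C(u,v) = \exp\left\{ \log(uv)\, A\!\left(\frac{\log u}{\log(uv)}\right) \right\},
\qquad \max(t, 1-t) \le A(t) \le 1,
```

where A is convex on [0, 1]; A \equiv 1 corresponds to independence and A(t) = \max(t, 1-t) to perfect dependence.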
85.
To ascertain the viability of a project, allocate resources, take part in bidding processes, and make other related decisions, modern project management requires techniques for forecasting the cost, duration, and performance of a project, not only under normal circumstances but also under external events that might abruptly change the status quo. We present a Bayesian framework that produces a global forecast of a project's performance. We aim at predicting the probabilities and impacts of a set of potential scenarios caused by combinations of disruptive events, and at using this information to deal with project management issues. To introduce the methodology we focus on a project's cost, but the ideas apply equally to forecasting project duration or performance. We illustrate our approach with an example based on a real case study involving estimation of the uncertainty in project cost while bidding for a contract.
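A stripped-down version of the scenario idea can be sketched as a Monte Carlo mixture over disruptive events (all probabilities, impacts, and the lognormal baseline below are hypothetical placeholders, not figures from the case study):

```python
import numpy as np

rng = np.random.default_rng(3)
reps = 100_000

# Hypothetical disruptive events: (probability, mean extra cost, sd), in M$
events = [(0.10, 2.0, 0.5),    # e.g. key supplier failure
          (0.05, 5.0, 1.5),    # e.g. regulatory change
          (0.20, 1.0, 0.3)]    # e.g. weather delays

cost = rng.lognormal(np.log(10.0), 0.1, reps)  # baseline cost under normal conditions
for p, m, sd in events:
    occurs = rng.uniform(size=reps) < p        # does the event occur in this draw?
    cost += occurs * rng.normal(m, sd, reps)   # add its impact when it does

print("mean:", cost.mean(), " 5/50/95 percentiles:", np.percentile(cost, [5, 50, 95]))
```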
86.
Based on an analysis of the various risk factors in banks' housing mortgage lending, this paper proposes a fuzzy early-warning model and presents the resulting warning outcomes.
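The abstract gives no model details; as a generic illustration of how a fuzzy warning grade can be computed, here is a toy fuzzy comprehensive evaluation with made-up factors, weights, and membership values:

```python
import numpy as np

# Hypothetical risk factors and weights (illustrative only)
w = np.array([0.40, 0.35, 0.25])   # loan-to-value, borrower credit, interest-rate risk

# Membership of each factor in the warning grades (safe, watch, alarm) - made-up values
R = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

b = w @ R                          # weighted fuzzy comprehensive evaluation
grades = ["safe", "watch", "alarm"]
print(dict(zip(grades, b.round(3))), "->", grades[int(np.argmax(b))])
```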
87.
Recently, we developed a GIS-integrated Integral Risk Index (IRI) to assess human health risks in areas where environmental pollutants are present. Contaminants were first ranked by applying a self-organizing map (SOM) to their persistence, bioaccumulation, and toxicity characteristics in order to obtain the Hazard Index (HI). In the present study, the original IRI was substantially improved by allowing the entry of probabilistic data: a neuroprobabilistic HI was developed by combining the SOM with Monte Carlo analysis. In general terms, the deterministic and probabilistic HIs followed a similar pattern: polychlorinated biphenyls (PCBs) and light polycyclic aromatic hydrocarbons (PAHs) showed the highest and lowest HI values, respectively. However, the bioaccumulation value of heavy metals increased notably once a probability density function was used to describe the bioaccumulation factor. To check its applicability, a case study was investigated. The probabilistic integral risk was calculated for the chemical/petrochemical industrial area of Tarragona (Catalonia, Spain), where an environmental program has been carried out since 2002. The change in risk between 2002 and 2005 was evaluated on the basis of probabilistic data on the levels of various pollutants in soils. The results indicated that the risks of the chemicals under study did not follow a homogeneous trend. However, the current levels of pollution do not constitute a relevant source of health risk for the local population. Moreover, the neuroprobabilistic HI appears to be an adequate tool for use in risk assessment processes.
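The probabilistic step can be caricatured in a few lines: one attribute gets a probability density instead of a point value, and the index becomes a distribution. In this toy sketch the SOM ranking of the paper is replaced by a plain weighted score, and every number is invented:

```python
import numpy as np

rng = np.random.default_rng(5)
reps = 50_000

# Toy attribute scores on a 0-1 scale for a single pollutant
persistence, toxicity = 0.6, 0.7
bioaccumulation = rng.lognormal(np.log(0.4), 0.5, reps).clip(0.0, 1.0)  # assumed PDF

hi = 0.3 * persistence + 0.3 * toxicity + 0.4 * bioaccumulation  # toy hazard index
print("deterministic HI:", 0.3 * 0.6 + 0.3 * 0.7 + 0.4 * 0.4)
print("probabilistic HI mean / 95th pct:", hi.mean(), np.percentile(hi, 95))
```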
88.
This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI-L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk-reducing effectiveness of WHTI-L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of the regulation and in how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI-L, and a range of casualty cost estimates based on the willingness-to-pay approach, our estimate of the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI-L ranges from 7% to 13%. Basing the results on a lower risk level that halves the annualized terrorism loss would double the critical risk reduction (14–26%), while basing them on a higher risk level that doubles the annualized terrorism loss would cut the critical risk reduction in half (3.5–6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit-cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage, but continued research to develop additional tools and data is necessary to support their application. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimation of the costs of casualties from terrorism events.
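The reported ranges follow from a one-line identity: the critical risk reduction is the annualized regulatory cost divided by the annualized terrorism loss. Back-solving from the 7-13% range against the $2.7-5.2 billion loss range implies an annualized cost near $0.35 billion, which the sketch below adopts as an assumption:

```python
cost = 0.35e9   # implied annualized cost of WHTI-L (back-solved; an assumption)
for loss in (2.7e9, 5.2e9, 1.35e9, 2.6e9, 5.4e9, 10.4e9):
    print(f"annualized loss ${loss/1e9:.2f}B -> critical risk reduction {cost/loss:.1%}")
```

Halving the loss doubles the required effectiveness (14-26%) and doubling it halves the requirement (3.5-6.6%), matching the figures above.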
89.
This paper investigates how individuals evaluate delayed outcomes whose realization times are risky. Under the discounted expected utility (DEU) model, such evaluations depend only on intertemporal preferences. We obtain several testable hypotheses using the DEU model as a benchmark and test them in three experiments. In general, our results show that the DEU model is a poor predictor of intertemporal choice behavior under timing risk. We find that individuals are averse to timing risk and that they evaluate timing lotteries in a rank-dependent fashion. The main driver of timing risk aversion is simply probabilistic risk aversion, which stems from the nonlinear treatment of probabilities.
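The DEU benchmark is sharper than it may look: with exponential discounting the discount function is convex in time, so by Jensen's inequality the model actually prefers a risky realization time to its mean, i.e., it is timing-risk seeking. A two-line check (the discount factor 0.9 is an illustrative assumption):

```python
delta = 0.9                                # per-period discount factor (illustrative)
risky = 0.5 * delta**1 + 0.5 * delta**3    # outcome arrives at t=1 or t=3, equal odds
certain = delta**2                         # outcome arrives at the mean time t=2
print(risky, certain, risky > certain)     # 0.8145 > 0.81: DEU prefers the risky timing
```

Observed timing risk aversion therefore runs directly against the benchmark prediction.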
90.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (the BMR level) above background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of the BMD is not restricted to the experimental doses and uses most of the available dose-response information. To quantify the statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (the BMDL), and may even consider it a more conservative alternative to the BMD itself. Computation of the BMDL may involve normal confidence limits for the BMD in conjunction with the delta method. Factors such as small sample size and nonlinearity in the model parameters can therefore affect the performance of the delta-method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method for estimating the BMDL that combines a resampling of residuals after model fitting with a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data the distribution of the BMD estimator tends to be left-skewed, and the bootstrap BMDLs are smaller on average than the delta-method BMDLs, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta-method BMDL, as its coverage probability is closer to the nominal level. We find that the BMD and BMDL estimates are generally insensitive to the choice of model, provided the models fit the data comparably well near the region of the BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, developmental toxicity experiments under the standard protocol support dose-response assessment at a 5% BMR for the BMD and a 95% confidence level for the BMDL.
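A bootstrap BMDL can be sketched compactly. The sketch below swaps in a parametric bootstrap with an ordinary logistic dose-response model and made-up quantal data, rather than the residual-resampling one-step scheme for clustered data proposed above; it is meant only to show the BMD/BMDL mechanics:

```python
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.0, 0.25, 0.5, 1.0])
n = np.array([50, 50, 50, 50])
k = np.array([2, 5, 10, 24])              # hypothetical numbers of affected subjects

def p(d, a, b):                           # logistic dose-response (a modeling choice)
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

def nll(theta, kk):                       # binomial negative log-likelihood
    pr = np.clip(p(dose, *theta), 1e-9, 1 - 1e-9)
    return -np.sum(kk * np.log(pr) + (n - kk) * np.log(1 - pr))

def fit(kk):
    return minimize(nll, x0=(-3.0, 3.0), args=(kk,), method="Nelder-Mead").x

def bmd(theta, bmr=0.05):                 # dose giving 5% extra risk over background
    a, b = theta
    p0 = p(0.0, a, b)
    target = p0 + bmr * (1.0 - p0)
    return (np.log(target / (1.0 - target)) - a) / b

theta_hat = fit(k)
rng = np.random.default_rng(4)
boot = [bmd(fit(rng.binomial(n, p(dose, *theta_hat)))) for _ in range(500)]
print("BMD:", bmd(theta_hat), " BMDL (5th percentile):", np.percentile(boot, 5))
```

The 5th percentile of the bootstrap distribution serves as a 95% one-sided lower confidence limit, i.e., the BMDL.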