A total of 1479 results were found (search time: 0 ms).
1.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the in-control behaviour: markedly wrong outcomes of the estimates occur only sufficiently rarely. The price for this protection will clearly be some loss of detection power during out-of-control. A change point comes in view as soon as this loss can be made sufficiently small.
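The core difficulty can be seen in a minimal sketch (the distribution, sample size, and false-alarm rate below are illustrative, not the authors'): a non-parametric upper control limit is simply an extreme empirical quantile of an in-control reference sample, so its stability hinges on that sample being very large.

```python
import numpy as np

def empirical_upper_limit(reference, alpha=0.001):
    """Non-parametric upper control limit: the (1 - alpha) empirical
    quantile of an in-control reference sample. For the extreme alpha
    typical of control charts, a very large reference sample is needed
    before this estimate stabilises."""
    return np.quantile(reference, 1.0 - alpha)

rng = np.random.default_rng(0)
# Skewed, non-normal in-control data where a normal-theory chart fails.
reference = rng.exponential(scale=1.0, size=100_000)
ucl = empirical_upper_limit(reference)
# True 0.999 quantile of Exp(1) is -ln(0.001) ≈ 6.91; with only a few
# hundred reference observations the same estimator fluctuates wildly.
```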
2.
Summary. Weak disintegrations are investigated from various points of view. Kolmogorov's definition of conditional probability is critically analysed, and it is noted how the notion of disintegrability plays some role in connecting Kolmogorov's definition with the one given in line with de Finetti's coherence principle. Conditions are given, on the domain of a prevision, implying the equivalence between weak disintegrability and conglomerability. Moreover, weak disintegrations are characterized in terms of coherence, in de Finetti's sense, of a suitable function. This fact enables us to give an interpretation of weak disintegrability as a form of "preservation of coherence". The previous results are also applied to a hypothetical inferential problem. In particular, an inference is shown to be coherent, in the sense of Heath and Sudderth, if and only if a suitable function is coherent, in de Finetti's sense. Research partially supported by: M.U.R.S.T. 40% "Problemi di inferenza pura".
3.
Summary. In studies to assess the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status at a higher rate in study subjects where the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss.
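The reweighting idea can be sketched with simulated data (the verification rule, cutoff, and all variable names below are illustrative assumptions, not the study's design): verified subjects are weighted by the inverse of their verification probability, which undoes the over-sampling of suggestive test results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
d = rng.binomial(1, 0.3, size=n)                # true disease status
y = rng.normal(loc=1.5 * d, scale=1.0, size=n)  # continuous screening test

# Verification is more likely when the test is suggestive of disease.
p_verify = np.where(y > 1.0, 0.9, 0.3)
v = rng.binomial(1, p_verify)                   # 1 = disease status ascertained

def ipw_tpr_fpr(y, d, v, p_verify, cutoff):
    """Inverse-probability-weighted TPR/FPR at a given cutoff, using
    verified subjects only, each weighted by 1 / P(verified | test)."""
    w = v / p_verify
    pos = y > cutoff
    tpr = np.sum(w * pos * d) / np.sum(w * d)
    fpr = np.sum(w * pos * (1 - d)) / np.sum(w * (1 - d))
    return tpr, fpr

# A naive estimate restricted to verified subjects is biased; the
# reweighted estimate recovers the full-population operating point.
tpr, fpr = ipw_tpr_fpr(y, d, v, p_verify, cutoff=0.75)
```

Sweeping the cutoff traces out the bias-corrected ROC curve.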
4.
In this article we provide a rigorous treatment of one of the central statistical issues of credit risk management. Given K-1 rating categories, the rating of a corporate bond over a certain horizon may either stay the same or change to one of the remaining K-2 categories; in addition, it is usually the case that the rating of some bonds is withdrawn during the time interval considered in the analysis. When estimating transition probabilities, we have thus to consider a K-th category, called withdrawal, which contains (partially) missing data. We show how maximum likelihood estimation can be performed in this setup; whereas in discrete time our solution gives rigorous support to a solution often used in applications, in continuous time the maximum likelihood estimator of the transition matrix computed by means of the EM algorithm represents a significant improvement over existing methods.
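The discrete-time solution that is often used in applications can be sketched as follows (the transition counts are hypothetical): drop the withdrawn observations from each row and renormalise over the remaining categories.

```python
import numpy as np

# One-period transition counts among three rating categories plus a
# withdrawal column (hypothetical numbers, for illustration only).
#              to:  A     B     C    withdrawn
counts = np.array([
    [90.0,  8.0,  1.0,  5.0],   # from A
    [10.0, 70.0, 10.0, 10.0],   # from B
    [ 2.0, 15.0, 60.0,  8.0],   # from C
])

def mle_discrete(counts):
    """Discrete-time ML estimate treating withdrawal as non-informative:
    discard the withdrawn observations and renormalise each row over the
    remaining categories (the applied solution the article supports)."""
    trans = counts[:, :-1]                  # drop the withdrawal column
    return trans / trans.sum(axis=1, keepdims=True)

P = mle_discrete(counts)   # rows are proper probability distributions
```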
5.
The standard hypothesis testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, the standard procedure has been shown by various authors to have poor control of the type I error probability, leading to overly liberal decisions. In this article, two test procedures are proposed which rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level compared with the standard procedure.
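The direction of the correction can be illustrated with a simple fixed-effect statistic (a sketch with made-up center estimates, not the authors' exact procedure): referring the same statistic to a t-distribution with k-1 degrees of freedom always yields a larger, less liberal p-value than the normal reference when the number of centers k is small.

```python
import numpy as np
from scipy import stats

def meta_test_pvalue(effects, variances, use_t=True):
    """Two-sided test of zero common effect from k center-level
    estimates with (assumed known) variances. use_t=False gives the
    standard normal reference; use_t=True the t(k-1) reference."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * effects) / np.sum(w)   # inverse-variance pooled effect
    se = np.sqrt(1.0 / np.sum(w))
    stat = est / se
    k = len(effects)
    ref = stats.t(df=k - 1) if use_t else stats.norm()
    return 2.0 * ref.sf(abs(stat))

effects = [0.30, 0.10, 0.25, 0.05]          # hypothetical center estimates
variances = [0.02, 0.03, 0.02, 0.04]
p_t = meta_test_pvalue(effects, variances, use_t=True)
p_z = meta_test_pvalue(effects, variances, use_t=False)
# p_t exceeds p_z, i.e. the t reference is the more conservative one.
```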
6.
Using structural pattern recognition methods, we construct combination sequences of multi-order moving-average patterns. By transforming and analysing historical stock-market data, we examine the profit and loss probabilities associated with each pattern combination sequence, and thereby establish a new method of securities investment analysis: moving-average pattern combination forecasting. Its advantages are that it partially compensates for the lag and uncertainty of traditional technical analysis, adjusts the relevant parameter settings promptly as the market changes, effectively improves the forecasting success rate, and lends itself readily to computer-based recognition.
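The pattern-encoding step can be sketched as follows (the window lengths and the simulated price series are illustrative assumptions; the article's actual feature set is not specified here): each day is labelled by the ordering of several moving averages, and next-day gains can then be tabulated per label to estimate profit and loss frequencies.

```python
import numpy as np

def ma_pattern_sequence(prices, windows=(5, 10, 20)):
    """Encode, for each day, the ordering of several moving averages
    (e.g. MA5 vs MA10, MA10 vs MA20) as a boolean pattern tuple.
    Tabulating subsequent returns per pattern gives the empirical
    profit/loss probabilities used for forecasting."""
    mas = [np.convolve(prices, np.ones(w) / w, mode="valid") for w in windows]
    m = min(len(a) for a in mas)
    mas = [a[-m:] for a in mas]   # align all averages to common dates
    # True means the shorter-window average sits above the longer one.
    return list(zip(*[mas[i] > mas[i + 1] for i in range(len(mas) - 1)]))

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 250)))
patterns = ma_pattern_sequence(prices)   # one tuple per aligned day
```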
7.
This note exhibits two independent random variables on the integers, X1 and X2, such that neither X1 nor X2 has a generalized Poisson distribution, but X1 + X2 does. This contradicts statements made by Professor Consul in his recent book.
8.
Various authors, given k location parameters, have considered lower confidence bounds on (standardized) differences between the largest and each of the other k - 1 parameters. They have then used these bounds to put lower confidence bounds on the probability of correct selection (PCS) in the same experiment (as was used for finding the lower bounds on differences). It is pointed out that this is an inappropriate inference procedure. Moreover, if the PCS refers to some later experiment, it is shown that if a non-trivial confidence bound is possible then it is already possible to conclude, with greater confidence, that correct selection has occurred in the first experiment. The short answer to the question in the title is therefore 'No', but this should be qualified in the case of a Bayesian analysis.
9.
The benchmark dose (BMD) is an exposure level that would induce a small risk increase (BMR level) above the background. The BMD approach to deriving a reference dose for risk assessment of noncancer effects is advantageous in that the estimate of BMD is not restricted to experimental doses and utilizes most available dose-response information. To quantify statistical uncertainty of a BMD estimate, we often calculate and report its lower confidence limit (i.e., BMDL), and may even consider it as a more conservative alternative to BMD itself. Computation of BMDL may involve normal confidence limits to BMD in conjunction with the delta method. Therefore, factors such as small sample size and nonlinearity in model parameters can affect the performance of the delta method BMDL, and alternative methods are useful. In this article, we propose a bootstrap method to estimate BMDL utilizing a scheme that consists of a resampling of residuals after model fitting and a one-step formula for parameter estimation. We illustrate the method with clustered binary data from developmental toxicity experiments. Our analysis shows that with moderately elevated dose-response data, the distribution of the BMD estimator tends to be left-skewed and bootstrap BMDLs are smaller than the delta method BMDLs on average, hence quantifying risk more conservatively. Statistically, the bootstrap BMDL quantifies the uncertainty of the true BMD more honestly than the delta method BMDL, as its coverage probability is closer to the nominal level. We find that BMD and BMDL estimates are generally insensitive to model choices provided that the models fit the data comparably well near the region of BMD. Our analysis also suggests that, in the presence of a significant and moderately strong dose-response relationship, the developmental toxicity experiments under the standard protocol support dose-response assessment at 5% BMR for BMD and 95% confidence level for BMDL.
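The overall bootstrap-BMDL recipe can be sketched with a one-hit dose-response model (all data, the model, and the simple binomial resample below are illustrative stand-ins: the article uses clustered litter data, residual resampling, and a one-step re-estimation formula, none of which are reproduced here):

```python
import numpy as np

# Hypothetical dose-response summary data: dose, responders, totals.
doses = np.array([0.0, 25.0, 50.0, 100.0])
n = np.array([120, 118, 115, 110])
x = np.array([3, 10, 22, 48])

def fit_one_hit(x_obs):
    """Coarse grid-search ML fit of P(d) = c + (1-c)(1 - exp(-b d));
    returns the slope b (background c is profiled out)."""
    best = None
    for c in np.linspace(0.005, 0.1, 20):
        for b in np.linspace(1e-4, 0.02, 100):
            p = c + (1 - c) * (1 - np.exp(-b * doses))
            ll = np.sum(x_obs * np.log(p) + (n - x_obs) * np.log(1 - p))
            if best is None or ll > best[0]:
                best = (ll, b)
    return best[1]

def bmd(b, bmr=0.05):
    # Extra risk [P(d) - P(0)] / [1 - P(0)] = 1 - exp(-b d) = bmr.
    return -np.log(1 - bmr) / b

b_hat = fit_one_hit(x)
rng = np.random.default_rng(3)
# Resample responses, refit, recompute BMD; the 5th percentile of the
# bootstrap BMDs is the percentile BMDL at the 95% confidence level.
boot = [bmd(fit_one_hit(rng.binomial(n, x / n))) for _ in range(50)]
bmdl = np.quantile(boot, 0.05)
```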
10.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added uncertainty to the deterministic point estimate of about one order of magnitude. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the size of the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
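The effect of rounding alone can be illustrated with plain interval arithmetic (the exposure formula and every numeric value below are hypothetical stand-ins, not the article's model): each reported scalar is replaced by the interval its significant digits imply and propagated through the dose equation.

```python
# Interval arithmetic for a toy exposure model with all-positive inputs:
# average daily dose  ADD = (C * IR * EF) / (BW * AT).
def mul(a, b):
    return (a[0] * b[0], a[1] * b[1])   # valid since all bounds are positive

def div(a, b):
    return (a[0] / b[1], a[1] / b[0])

C  = (0.35, 0.45)    # soil concentration, mg/kg  (reported as "0.4")
IR = (95.0, 105.0)   # ingestion rate, mg/day     (reported as "100")
EF = (0.25, 0.35)    # exposure fraction          (reported as "0.3")
BW = (65.0, 75.0)    # body weight, kg            (reported as "70")
AT = (1.0, 1.0)      # averaging-time factor, exact

add = div(mul(mul(C, IR), EF), mul(BW, AT))
# The width of `add` shows how much uncertainty the rounded point
# estimates alone conceal, before any distributional assumptions.
```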
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号