11.
Laud et al. (1993) describe a method for random variate generation from D-distributions. In this paper an alternative method using substitution sampling is given. An algorithm for random variate generation from SD-distributions is also given.
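Substitution sampling draws alternately from each full conditional distribution, so the chain's stationary law is the target joint distribution. The D-distribution conditionals used in the paper are not reproduced in the abstract, so the minimal sketch below only illustrates the general scheme on a hypothetical bivariate normal target whose conditionals are known in closed form; the function name and all parameter values are invented for illustration.

```python
import numpy as np

# Generic substitution (Gibbs-type) sampler: alternate draws from the two full
# conditionals of a hypothetical bivariate standard normal with correlation rho.
# The D-distribution conditionals of the paper would replace these draws.
def substitution_sampler(n_iter=5000, rho=0.8, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    draws = []
    for t in range(n_iter):
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw x | y
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw y | x
        if t >= burn_in:
            draws.append((x, y))
    return np.array(draws)

samples = substitution_sampler()
print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])  # near (0, 0) and rho
```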
12.
The well-known chi-squared goodness-of-fit test for a multinomial distribution is generally biased when the observations are subject to misclassification. Pardo and Zografos (2000) considered the problem using a double sampling scheme and φ-divergence test statistics. A new problem appears if the null hypothesis is not simple, because estimators must then be given for the unknown parameters. In this paper the minimum φ-divergence estimators are considered and some of their properties are established. The proposed φ-divergence test statistics are obtained by calculating φ-divergences between probability density functions and replacing the parameters in the resulting expressions by their minimum φ-divergence estimators. Asymptotic distributions of the new test statistics are also obtained. The testing procedure is illustrated with an example.
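For orientation, the generic φ-divergence between two multinomial probability vectors, and the form the associated test statistic usually takes in this literature, are sketched below in LaTeX; the exact statistics of the double-sampling, misclassification setting treated in the paper are not reproduced here.

```latex
% Generic phi-divergence between probability vectors p and q, with
% phi convex on (0, infinity) and phi(1) = 0:
\[
  D_{\phi}(p, q) \;=\; \sum_{i=1}^{k} q_i \,\phi\!\Bigl(\frac{p_i}{q_i}\Bigr).
\]
% A typical phi-divergence goodness-of-fit statistic replaces the unknown
% parameter by its minimum phi-divergence estimator \hat\theta_\phi:
\[
  T_n^{\phi} \;=\; \frac{2n}{\phi''(1)}\,
    D_{\phi}\bigl(\hat p,\; p(\hat\theta_{\phi})\bigr),
\]
% which is asymptotically chi-squared under the null hypothesis.
```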
13.
On the Brand Capital of Scientific and Technical Journals
Brand capital is a key factor in the survival and development of a scientific and technical journal, and it embodies the unity of social and economic benefits. The return on the value of brand capital is a slow but quite stable process. A sampling survey and analysis of variance are used to demonstrate these points quantitatively.
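The abstract does not report the survey variables or data, so the Python sketch below only illustrates how a one-way analysis of variance of the kind mentioned might be carried out; the grouping by brand tier and the scores are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical data: an evaluation score for journals grouped by brand tier.
# The actual survey variables used in the paper are not reported in the abstract.
rng = np.random.default_rng(1)
strong_brand = rng.normal(80, 5, size=30)
average_brand = rng.normal(74, 5, size=30)
weak_brand = rng.normal(70, 5, size=30)

f_stat, p_value = stats.f_oneway(strong_brand, average_brand, weak_brand)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p suggests the group means differ
```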
14.
Not having a variance estimator is a serious practical weakness of a sampling design. This paper provides unbiased variance estimators for several sampling designs based on inverse sampling, both with and without an adaptive component. It proposes a new design, called the general inverse sampling design, that avoids sampling an infeasibly large number of units. The paper provides estimators for this design as well as for its adaptive modification. A simple artificial example is used to demonstrate the computations. The adaptive and non-adaptive designs are compared using simulations based on real data sets. The results indicate that, for appropriate populations, the adaptive version can yield a substantial variance reduction compared with the non-adaptive version. Also, adaptive general inverse sampling with a limit on the initial sample size achieves a greater variance reduction than without the limit.
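As a point of reference for the inverse sampling idea, the Python sketch below shows the classical single-attribute case: units are drawn until a fixed number k of "successes" is observed, and Haldane's estimator (k − 1)/(n − 1) of the proportion is unbiased. This is only the textbook special case, not the general inverse sampling design or its adaptive modification proposed in the paper.

```python
import numpy as np

# Classical inverse sampling for a proportion: keep drawing Bernoulli(p) units
# until k successes are observed.  Haldane's estimator (k - 1) / (n - 1),
# where n is the final sample size, is unbiased for p in this simple setting.
def inverse_sample(p, k, rng):
    n = 0
    successes = 0
    while successes < k:
        n += 1
        successes += rng.random() < p
    return (k - 1) / (n - 1)

rng = np.random.default_rng(2)
estimates = [inverse_sample(p=0.3, k=10, rng=rng) for _ in range(20000)]
print(np.mean(estimates))  # should be close to the true p = 0.3
```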
15.
Summary. Weak disintegrations are investigated from various points of view. Kolmogorov's definition of conditional probability is critically analysed, and it is noted how the notion of disintegrability plays a role in connecting Kolmogorov's definition with the one given in line with de Finetti's coherence principle. Conditions are given, on the domain of a prevision, implying the equivalence between weak disintegrability and conglomerability. Moreover, weak disintegrations are characterized in terms of coherence, in de Finetti's sense, of a suitable function. This fact enables us to give an interpretation of weak disintegrability as a form of "preservation of coherence". The previous results are also applied to a hypothetical inferential problem. In particular, an inference is shown to be coherent, in the sense of Heath and Sudderth, if and only if a suitable function is coherent in de Finetti's sense. Research partially supported by: M.U.R.S.T. 40% "Problemi di inferenza pura".
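To fix the terminology, the standard finitely additive definitions of conglomerability and disintegrability in a partition are recorded below; the "weak" versions studied in the paper refine these and are not reproduced here.

```latex
% For a probability P, a partition \pi = \{B_i\} and an event A:
% conglomerability of P in \pi
\[
  \inf_i P(A \mid B_i) \;\le\; P(A) \;\le\; \sup_i P(A \mid B_i),
\]
% disintegrability of P in \pi
\[
  P(A) \;=\; \sum_i P(A \mid B_i)\, P(B_i).
\]
```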
16.
Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainty. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Owing to shifts in the track and the rapid intensification of the storm, real-time loss estimates grew from 2 to 3 billion dollars late on August 12 to a brief peak of 50 billion dollars as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then moved on to Orlando in the central part of the state, with early post-storm estimates converging on the 28 to 31 billion dollar range. Comparable damage in central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform on-site (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To inform future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. The uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
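For readers unfamiliar with the wind field model named in the abstract, the Python sketch below evaluates the Holland (1980) gradient wind profile in which the B shape parameter appears; every numerical value (central and ambient pressure, radius of maximum winds, B, air density, latitude) is illustrative and has no connection to the audited loss models.

```python
import numpy as np

# Illustrative Holland (1980) gradient wind profile underlying a "Holland B"
# wind field model.  All parameter values are invented for this sketch.
def holland_gradient_wind(r_km, p_c=96000.0, p_n=101300.0, r_max_km=30.0,
                          B=1.4, rho=1.15, lat_deg=27.0):
    """Gradient wind speed (m/s) at radius r_km (km) from the storm centre."""
    r = np.asarray(r_km, dtype=float) * 1000.0
    r_max = r_max_km * 1000.0
    f = 2 * 7.292e-5 * np.sin(np.radians(lat_deg))   # Coriolis parameter
    a = (r_max / r) ** B
    pressure_term = (B / rho) * (p_n - p_c) * a * np.exp(-a)
    return np.sqrt(pressure_term + (r * f / 2) ** 2) - r * f / 2

print(holland_gradient_wind([10, 30, 60, 120]))  # wind peaks near r_max, then decays
```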
17.
Summary. In studies assessing the accuracy of a screening test, definitive disease assessment is often too invasive or expensive to ascertain for all study subjects. Although it may be more ethical or cost-effective to ascertain the true disease status at a higher rate in study subjects for whom the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no such methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic (ROC) curves and the area under the ROC curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The proposed bias-correction estimators are applied to data from a study of screening tests for neonatal hearing loss.
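The reweighting idea can be illustrated generically: if the probability of verification depends only on the observed test result, weighting each verified subject by the inverse of that probability yields approximately unbiased true and false positive rates at a given threshold. The Python sketch below uses simulated data and a simple logistic verification rule; it is a minimal illustration of inverse-probability weighting, not the paper's exact estimators.

```python
import numpy as np

# Only some screened subjects have disease status verified, and verification is
# more likely when the continuous test value is high; weighting verified subjects
# by 1 / P(verified | test) corrects the naive, verified-only estimates.
rng = np.random.default_rng(3)
n = 20000
disease = rng.random(n) < 0.2
test = np.where(disease, rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))
p_verify = 1.0 / (1.0 + np.exp(-(test - 0.5)))     # verification depends on the test
verified = rng.random(n) < p_verify

c = 0.5                                            # positivity threshold
w = np.where(verified, 1.0 / p_verify, 0.0)        # IPW weights (0 if unverified)
tpr = np.sum(w * (test > c) * disease) / np.sum(w * disease)
fpr = np.sum(w * (test > c) * ~disease) / np.sum(w * ~disease)
naive_tpr = np.mean(test[verified & disease] > c)  # verified-only estimate is biased
print(f"IPW TPR = {tpr:.3f}, naive TPR = {naive_tpr:.3f}, IPW FPR = {fpr:.3f}")
```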
18.
19.
In this article we provide a rigorous treatment of one of the central statistical issues of credit risk management. Given K-1 rating categories, the rating of a corporate bond over a certain horizon may either stay the same or change to one of the remaining K-2 categories; in addition, the rating of some bonds is usually withdrawn during the time interval considered in the analysis. When estimating transition probabilities we thus have to consider a K-th category, called withdrawal, which contains (partially) missing data. We show how maximum likelihood estimation can be performed in this setup; whereas in discrete time our solution gives rigorous support to a solution often used in applications, in continuous time the maximum likelihood estimator of the transition matrix computed by means of the EM algorithm represents a significant improvement over existing methods.
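In discrete time, the adjustment commonly used in practice (the one the abstract says receives rigorous support) simply drops withdrawals from the row totals and renormalises over the surviving rating categories. The Python sketch below illustrates this with invented one-period transition counts; the continuous-time EM estimator developed in the paper is not reproduced.

```python
import numpy as np

# Rows/cols: ratings A, B, C plus a withdrawal column W (observed transition counts).
# All counts are invented for illustration.
counts = np.array([
    [900, 60, 10, 30],    # from A
    [40, 800, 50, 110],   # from B
    [5, 70, 600, 25],     # from C
], dtype=float)

rating_counts = counts[:, :-1]                        # discard the withdrawal column
row_totals = rating_counts.sum(axis=1, keepdims=True)
P_hat = rating_counts / row_totals                    # renormalised transition matrix
print(np.round(P_hat, 3))                             # each row sums to 1
```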
20.
The standard hypothesis testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, the standard procedure has been shown by various authors to have poor control of the type I error probability, leading to overly liberal decisions. In this article, two test procedures are proposed that rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level than the standard procedure.
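For context, the general form of the standard fixed-effects combined statistic and the nature of the proposed modification are sketched below in LaTeX; the precise statistics and degrees of freedom of the two proposed procedures are not reproduced from the article.

```latex
% Standard combined statistic for K centres with estimated treatment effects
% \hat\delta_i and weights w_i = 1 / \widehat{\mathrm{Var}}(\hat\delta_i):
\[
  Z \;=\; \frac{\sum_{i=1}^{K} w_i\,\hat\delta_i}{\sqrt{\sum_{i=1}^{K} w_i}}
  \;\overset{\text{approx.}}{\sim}\; N(0,1) \quad \text{under } H_0 .
\]
% The proposed procedures instead refer a statistic of this type to a
% t-distribution with estimated degrees of freedom, which gives better control
% of the type I error probability when the centre sample sizes are small.
```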