1.
Abstract

Characterizing relations via Rényi entropy of m-generalized order statistics are considered along with examples and related stochastic orderings. Previous results for common order statistics are included.
2.
Abstract

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
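To make the "prior from other trials" idea concrete, here is a minimal sketch (not the authors' method) of a conjugate normal-normal update: a prior on the treatment effect synthesized from earlier trials is combined with a Phase 3 estimate, and the posterior probability of a positive effect is computed. All numbers are hypothetical.

```python
import math

def posterior_normal(prior_mean, prior_sd, est, se):
    """Conjugate normal-normal update: combine a normal prior on the
    treatment effect with a normally distributed Phase 3 estimate."""
    w_prior = 1.0 / prior_sd ** 2      # precision of the prior
    w_data = 1.0 / se ** 2             # precision of the trial estimate
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, math.sqrt(post_var)

def prob_effect_positive(mean, sd):
    """P(effect > 0) under a normal posterior, via the normal CDF."""
    z = mean / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical numbers: prior synthesized from earlier trials,
# combined with a Phase 3 point estimate and standard error.
m, s = posterior_normal(prior_mean=0.3, prior_sd=0.2, est=0.25, se=0.1)
p = prob_effect_positive(m, s)
```

The output `p` is the kind of direct probability statement about the magnitude of the treatment effect that the abstract argues for, in place of a binary p-value threshold.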
3.
The conditional tail expectation (CTE) is an indicator of tail behavior that takes into account both the frequency and magnitude of a tail event. However, the asymptotic normality of its empirical estimator requires that the underlying distribution possess a finite variance; this can be a strong restriction in actuarial and financial applications. A valuable alternative is the median shortfall (MS), although it only gives information about the frequency of a tail event. We construct a class of tail Lp-medians encompassing the MS and CTE. For p in (1,2), a tail Lp-median depends on both the frequency and magnitude of tail events, and its empirical estimator is, within the range of the data, asymptotically normal under a condition weaker than a finite variance. We extrapolate this estimator and another technique to extreme levels using the heavy-tailed framework. The estimators are showcased in a simulation study and on real fire insurance data.
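The two endpoints of the class can be illustrated with a simple empirical sketch (an illustration only, not the paper's extrapolated estimators): the CTE at level alpha is the mean of losses beyond the empirical alpha-quantile, while the MS is the median of those same exceedances, i.e. the quantile at level (1 + alpha)/2.

```python
import statistics

def var_quantile(sample, alpha):
    """Crude empirical quantile at level alpha (Value-at-Risk)."""
    s = sorted(sample)
    k = int(alpha * len(s))            # index of the alpha-quantile
    return s[min(k, len(s) - 1)]

def cte(sample, alpha):
    """Conditional tail expectation: mean of losses beyond VaR_alpha,
    so it reflects both frequency and magnitude of tail events."""
    q = var_quantile(sample, alpha)
    tail = [x for x in sample if x > q]
    return statistics.mean(tail) if tail else q

def median_shortfall(sample, alpha):
    """Median shortfall: the median of losses beyond VaR_alpha,
    which equals the quantile at level (1 + alpha) / 2."""
    return var_quantile(sample, (1.0 + alpha) / 2.0)

# Hypothetical loss sample
losses = [0.5, 1.2, 0.8, 3.0, 10.0, 2.2, 0.9, 4.5, 1.1, 7.5]
```

On a right-skewed sample like this one, the CTE exceeds the MS because it is pulled up by the largest losses, which is exactly the magnitude information the MS discards.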
4.
Modelling daily multivariate pollutant data at multiple sites
Summary. This paper considers the spatiotemporal modelling of four pollutants measured daily at eight monitoring sites in London over a 4-year period. Such multiple-pollutant data sets measured over time at multiple sites within a region of interest are typical. Here, the modelling was carried out to provide the exposure for a study investigating the health effects of air pollution. Alternative objectives include the design problem of the positioning of a new monitoring site, or for regulatory purposes to determine whether environmental standards are being met. In general, analyses are hampered by missing data due, for example, to a particular pollutant not being measured at a site, a monitor being inactive by design (e.g. a 6-day monitoring schedule) or because of an unreliable or faulty monitor. Data of this type are modelled here within a dynamic linear modelling framework, in which the dependences across time, space and pollutants are exploited. Throughout, the approach is Bayesian, with implementation via Markov chain Monte Carlo sampling.
5.
It is well-known that, under Type II double censoring, the maximum likelihood (ML) estimators of the location and scale parameters, θ and δ, of a two-parameter exponential distribution are linear functions of the order statistics. In contrast, when θ is known, the ML estimator of δ does not admit a closed form expression. It is shown, however, that the ML estimator of the scale parameter exists and is unique. Moreover, it has good large-sample properties. In addition, sharp lower and upper bounds for this estimator are provided, which can serve as starting points for iterative interpolation methods such as regula falsi. Explicit expressions for the expected Fisher information and Cramér-Rao lower bound are also derived. In the Bayesian context, assuming an inverted gamma prior on δ, the uniqueness, boundedness and asymptotics of the highest posterior density estimator of δ can be deduced in a similar way. Finally, an illustrative example is included.
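The role of the sharp bounds is to bracket the root of the likelihood equation so an interpolation method converges quickly. A generic sketch of regula falsi (false position), the method the abstract names, applied here to a simple monotone test equation rather than the paper's specific score function:

```python
def regula_falsi(f, lo, hi, tol=1e-10, max_iter=100):
    """Regula falsi (false position): repeatedly replace one bracket
    endpoint with the root of the secant line through (lo, f(lo)) and
    (hi, f(hi)), keeping the root bracketed throughout."""
    flo, fhi = f(lo), f(hi)
    assert flo * fhi < 0, "bounds must bracket the root"
    x = lo
    for _ in range(max_iter):
        x = hi - fhi * (hi - lo) / (fhi - flo)   # secant intersection
        fx = f(x)
        if abs(fx) < tol:
            return x
        if flo * fx < 0:          # root lies in [lo, x]
            hi, fhi = x, fx
        else:                     # root lies in [x, hi]
            lo, flo = x, fx
    return x

# Illustrative equation: solve x^3 = 2 starting from the bracket [1, 2]
root = regula_falsi(lambda x: x ** 3 - 2.0, 1.0, 2.0)
```

In the paper's setting, the sharp lower and upper bounds for the ML estimator of δ would play the role of `lo` and `hi`.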
6.
On statistics education for non-statistics majors
In view of the problems currently existing in statistics education for non-statistics majors, this paper argues that raising awareness of the importance of statistics is the prerequisite for effective statistics teaching; that reform of the teaching process is an important guarantee of better teaching outcomes; and that the professional competence of statistics teachers is the key to improving the effectiveness of statistics teaching.
7.
In this paper we propose a new robust estimator in the context of two-stage estimation methods directed towards the correction of endogeneity problems in linear models. Our estimator is a combination of Huber estimators for each of the two stages, with scale corrections implemented using preliminary median absolute deviation estimators. In this way we obtain a two-stage estimation procedure that is an interesting compromise between concerns of simplicity of calculation, robustness and efficiency. This method compares well with other possible estimators such as two-stage least-squares (2SLS) and two-stage least-absolute-deviations (2SLAD), asymptotically and in finite samples. It is notably well suited to contamination that affects the distribution tails more heavily than a few isolated outliers would, while not losing as much efficiency as other popular estimators in the uncontaminated case, e.g. under normality. A further novelty is that we allow random regressors and asymmetric errors, which is not often the case in the literature on robust estimators.
8.
This paper presents a new Laplacian approximation to the posterior density of η = g(θ). It has a simpler analytical form than that described by Leonard et al. (1989). The approximation derived by Leonard et al. requires a conditional information matrix Rη to be positive definite for every fixed η. However, in many cases, not all Rη are positive definite. In such cases, the computations of their approximations fail, since the approximation cannot be normalized. However, the new approximation may be modified so that the corresponding conditional information matrix can be made positive definite for every fixed η. In addition, a Bayesian procedure for contingency-table model checking is provided. An example of cross-classification between the educational level of a wife and fertility-planning status of couples is used for explanation. Various Laplacian approximations are computed and compared in this example and in an example of public school expenditures in the context of Bayesian analysis of the multiparameter Fisher-Behrens problem.
9.
The well-known chi-squared goodness-of-fit test for a multinomial distribution is generally biased when the observations are subject to misclassification. In Pardo and Zografos (2000) the problem was considered using a double sampling scheme and φ-divergence test statistics. A new problem appears if the null hypothesis is not simple because it is necessary to give estimators for the unknown parameters. In this paper the minimum φ-divergence estimators are considered and some of their properties are established. The proposed φ-divergence test statistics are obtained by calculating φ-divergences between probability density functions and by replacing parameters by their minimum φ-divergence estimators in the derived expressions. Asymptotic distributions of the new test statistics are also obtained. The testing procedure is illustrated with an example.
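For orientation, the classical Pearson chi-squared statistic that this line of work generalizes is a member of the φ-divergence family (it corresponds to φ(t) = (t − 1)²/2, up to a 2n scaling). A minimal sketch with hypothetical counts, without the misclassification correction or double sampling scheme of the paper:

```python
def pearson_chi2(observed, probs):
    """Pearson chi-squared statistic for a multinomial sample against
    hypothesized cell probabilities: sum of (O - E)^2 / E over cells."""
    n = sum(observed)
    stat = 0.0
    for o, p in zip(observed, probs):
        e = n * p                     # expected count under H0
        stat += (o - e) ** 2 / e
    return stat

# Hypothetical data: 100 observations over three cells,
# tested against H0 probabilities (0.25, 0.5, 0.25).
obs = [30, 50, 20]
p0 = [0.25, 0.5, 0.25]
stat = pearson_chi2(obs, p0)
```

Under H0 and without misclassification, `stat` is asymptotically chi-squared with (number of cells − 1) degrees of freedom; the point of the paper is that misclassified observations break this, motivating the double-sampling φ-divergence statistics.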
10.
The phenotype of a quantitative trait locus (QTL) is often modeled by a finite mixture of normal distributions. If the QTL effect depends on the number of copies of a specific allele one carries, then the mixture model has three components. In this case, the mixing proportions have a binomial structure according to the Hardy–Weinberg equilibrium. In the search for QTL, a significance test of homogeneity against the Hardy–Weinberg normal mixture model alternative is an important first step. The LOD score method, a likelihood ratio test used in genetics, is a favored choice. However, there is not yet a general theory for the limiting distribution of the likelihood ratio statistic in the presence of unknown variance. This paper derives the limiting distribution of the likelihood ratio statistic, which can be described by the supremum of a quadratic form of a Gaussian process. Further, the result implies that the distribution of the modified likelihood ratio statistic is well approximated by a chi-squared distribution. Simulation results show that the approximation has satisfactory precision for the cases considered. We also give a real-data example.
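The alternative model under test can be written down directly: with allele frequency p, the three genotype classes occur with Hardy–Weinberg proportions ((1 − p)², 2p(1 − p), p²), giving a three-component normal mixture density. A minimal sketch of that density (illustration only; the paper's contribution is the limiting distribution of the likelihood ratio statistic, not this density itself):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def hw_mixture_pdf(x, p, mus, sigma):
    """Three-component normal mixture whose mixing proportions follow
    Hardy-Weinberg equilibrium with allele frequency p:
    weights ((1-p)^2, 2p(1-p), p^2) on component means mus[0..2]."""
    w = [(1 - p) ** 2, 2 * p * (1 - p), p ** 2]
    return sum(wi * normal_pdf(x, mu, sigma) for wi, mu in zip(w, mus))
```

The homogeneity null corresponds to the three component means coinciding, in which case the mixture collapses to a single normal regardless of p; the likelihood ratio test compares that fit against the full Hardy–Weinberg mixture.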
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号