991.
This article investigates the number of games of baseball that should be played (1) in a World Series competition, and (2) in a pennant race competition within each league, in order to have a reasonable level of confidence that the best team wins the competition. The current number of games played is found to be highly inadequate for the World Series and only barely sufficient for the pennant race.
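The series-length question above reduces to a binomial calculation: if the better team wins each game independently with probability p, the chance it wins a best-of-n series can be computed directly. A minimal sketch (the p = 0.55 edge is an illustrative assumption, not a figure from the paper):

```python
from math import comb

def series_win_prob(p, n_games):
    """Probability that a team winning each game independently with
    probability p wins a best-of-n_games series (first to the majority)."""
    need = (n_games + 1) // 2
    # Sum over k = number of games the opponent wins before the clincher.
    return sum(comb(need - 1 + k, k) * p**need * (1 - p)**k
               for k in range(need))

# A clearly better team (p = 0.55) wins a best-of-7 series
# only about 61% of the time.
print(round(series_win_prob(0.55, 7), 3))  # → 0.608
```

Lengthening the series raises this probability only slowly, which is the intuition behind the paper's finding that the current formats are too short.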
992.
Suppose the same nonlinear function involving k parameters is fit to each of t populations. Suppose further it is of interest to compare a specific parameter of the models across the populations. Such comparisons can be expressed as linear hypotheses about the parameters of the nonlinear models. A weighted linear least squares (WLLS) procedure is proposed to test these linear hypotheses. The advantages and disadvantages of the WLLS procedure are discussed. This procedure is also compared to a nonlinear least squares procedure for testing these hypotheses in nonlinear models.
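The idea of testing a linear hypothesis across separately fitted models can be sketched as an inverse-variance weighted least squares homogeneity test: pool the per-population estimates of the parameter of interest and measure the weighted scatter around the pooled value. This is a generic illustration of the approach, not the paper's exact WLLS procedure, and the function name is hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def wls_homogeneity_test(estimates, std_errors):
    """Weighted least squares test of H0: the parameter is the same in
    every population, given per-population estimates and standard errors."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                          # inverse-variance weights
    pooled = (w * est).sum() / w.sum()       # WLS estimate under H0
    x2 = (w * (est - pooled)**2).sum()       # chi-square statistic
    df = est.size - 1
    return x2, df, chi2.sf(x2, df)

# Hypothetical per-population estimates of the same nonlinear-model parameter:
x2, df, pval = wls_homogeneity_test([1.02, 0.97, 1.31], [0.05, 0.06, 0.07])
```

In practice the estimates and standard errors would come from the t separate nonlinear fits.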
993.
In this paper we present relatively simple (ruler, paper, and pencil) nonparametric procedures for constructing joint confidence regions for (i) the median and the interquartile range for the symmetric one-sample problem and (ii) the shift and ratio of scale parameters for the two-sample case. Both procedures are functions of the sample quartiles and have exact confidence levels when the populations are continuous. The one-sample case requires symmetry of the first and third quartiles about the median.

The confidence regions we propose are always convex, nested for decreasing confidence levels, and compact for reasonably large sample sizes. Both exact small-sample and approximate large-sample distributions are given.
994.
This paper considers the problem of choosing between the simple model N(0, Id) and the full model N(θ, Id) based on the observation X from N(θ, Id), where X, θ ∈ Rd, 0 is the null vector in Rd, and Id is the d×d identity matrix. It is shown that the selection rule which chooses the full model if |X| > a0, for some a0 > 0, and the simple model otherwise is an admissible minimax model selection rule relative to a loss function which takes into account both inaccuracy and complexity.
995.
996.
This paper describes the Bayesian inference and prediction of the two-parameter Weibull distribution when the data are Type-II censored. The aim of this paper is twofold. First we consider the Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form. We use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and to construct symmetric credible intervals. Further we consider the Bayes prediction of the future order statistics based on the observed sample. We consider the posterior predictive density of the future observations and also construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare different methods and one data analysis is performed for illustration purposes.
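A minimal sketch of posterior sampling for a Weibull model under Type-II censoring, where only the r smallest of n observations are seen. The abstract's authors use Gibbs sampling; for a self-contained illustration this sketch uses random-walk Metropolis on the log-parameters instead, with an assumed vague prior:

```python
import numpy as np

def log_post(theta, x_obs, n, r):
    """Log-posterior for Weibull(shape a, rate lam) with density
    a*lam*x**(a-1)*exp(-lam*x**a), Type-II censored at the r-th order
    statistic; vague normal priors on the log-parameters (an assumption)."""
    a, lam = np.exp(theta)
    t = x_obs[-1]                                  # censoring point x_(r)
    loglik = (r * (np.log(a) + np.log(lam))
              + (a - 1.0) * np.log(x_obs).sum()
              - lam * (x_obs**a).sum()
              - (n - r) * lam * t**a)              # n - r censored survivors
    logprior = -0.5 * (theta**2).sum() / 100.0
    return loglik + logprior

def metropolis(x_obs, n, r, iters=4000, step=0.15, seed=0):
    """Random-walk Metropolis on (log a, log lam); stands in for the
    paper's Gibbs sampler purely for illustration."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    lp = log_post(theta, x_obs, n, r)
    draws = np.empty((iters, 2))
    for i in range(iters):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop, x_obs, n, r)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        draws[i] = np.exp(theta)                   # back to (shape, rate)
    return draws
```

Bayes estimates and symmetric credible intervals then come straight from the retained draws, e.g. `np.quantile(draws[burn:, 0], [0.025, 0.975])`.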
997.
When missing data occur in studies designed to compare the accuracy of diagnostic tests, a common, though naive, practice is to base the comparison of sensitivity, specificity, as well as of positive and negative predictive values on some subset of the data that fits into methods implemented in standard statistical packages. Such methods are usually valid only under the strong missing completely at random (MCAR) assumption and may generate biased and less precise estimates. We review some models that use the dependence structure of the completely observed cases to incorporate the information of the partially categorized observations into the analysis and show how they may be fitted via a two-stage hybrid process involving maximum likelihood in the first stage and weighted least squares in the second. We indicate how computational subroutines written in R may be used to fit the proposed models and illustrate the different analysis strategies with observational data collected to compare the accuracy of three distinct non-invasive diagnostic methods for endometriosis. The results indicate that even when the MCAR assumption is plausible, the naive partial analyses should be avoided.
998.
We consider the use of emulator technology as an alternative to second-order Monte Carlo (2DMC) in the uncertainty analysis for a percentile from the output of a stochastic model. 2DMC is a technique that uses repeated sampling in order to make inferences on the uncertainty and variability in a model output. The conventional 2DMC approach is often computationally expensive, making methods for uncertainty and sensitivity analysis infeasible. We explore the adequacy and efficiency of the emulation approach, and we find that emulation provides a viable alternative in this situation. We demonstrate these methods using two examples of different input dimensions, including an application that considers contamination in pre-pasteurised milk.
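The 2DMC idea can be sketched in a few lines: an outer loop samples the uncertain parameters, an inner loop samples the variability given those parameters, and each outer draw yields one estimate of the percentile of interest. The lognormal toy model below is an assumption for illustration only:

```python
import numpy as np

def two_dim_mc(n_outer=200, n_inner=2000, q=0.95, seed=0):
    """2DMC sketch: the outer loop samples parameter *uncertainty*, the
    inner loop samples *variability* given those parameters; each outer
    draw yields one estimate of the q-percentile of the model output."""
    rng = np.random.default_rng(seed)
    pcts = np.empty(n_outer)
    for i in range(n_outer):
        mu = rng.normal(0.0, 0.2)                 # uncertain parameter
        x = rng.lognormal(mu, 0.5, n_inner)       # variability given mu
        pcts[i] = np.quantile(x, q)
    return pcts

pcts = two_dim_mc()
# Uncertainty about the 95th percentile, summarised by a 90% interval:
lo, hi = np.quantile(pcts, [0.05, 0.95])
```

The cost is n_outer × n_inner model evaluations; an emulator replaces the expensive inner sampling with a cheap statistical approximation, which is the efficiency gain the abstract investigates.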
999.
Stakes and chips     
Gambling has provided centuries of inspiration to probabilists and statisticians. The process continues. There also exist fundamental links between betting and a newer subject, Information Theory, which began with Claude Shannon and his ground-breaking 1948 paper A Mathematical Theory of Communication.
The connection arises very naturally: a successful gambler and a successful data-compression algorithm must both accurately estimate probability distributions. There are practical results as well as theoretical ones: Shannon and colleagues, including the legendary gambler and mathematician Edward Thorp, actually attempted to apply results from information theory in Las Vegas casinos and stock market transactions. Oliver Johnson examines the connections.
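The gambling–compression link has a crisp quantitative form: for even-money bets won with probability p, the Kelly stake f = 2p − 1 maximizes the expected log-growth of wealth, and the optimal growth rate equals 1 − H(p) bits per bet, where H is Shannon's binary entropy. A minimal sketch (the p = 0.55 edge is an illustrative assumption):

```python
from math import log2

def growth_rate(p, f):
    """Expected log2 wealth growth per even-money bet when you win with
    probability p and stake a fraction f of your wealth."""
    return p * log2(1 + f) + (1 - p) * log2(1 - f)

def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.55
f_kelly = 2 * p - 1         # Kelly stake for even odds
# Shannon's link: the best achievable growth rate is 1 - H(p),
# exactly the information the gambler holds beyond a fair coin.
g_star = growth_rate(p, f_kelly)
```

Over-betting is punished: with p = 0.55, staking 30% of wealth already gives a negative growth rate, one reason practical schemes like Thorp's stake conservatively.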
1000.
As the potential for more children being raised by single parents increases, so does the societal need to examine this phenomenon of single-parent earnings and the impact it will have on the ability to support a family above the poverty line. Research suggests a substantial pay gap between men and women, but most research is limited to individuals in traditional families. This study explores income disparity and poverty between single mothers and single fathers across three decades (1990–2010), using a US nationally representative sample. Based on human capital theory, our analysis reveals that single mothers were in poverty at far greater rates than single fathers, after controlling for a host of demographic, human capital, and work-related variables. We also found that a contributing factor to this disparity is that single mothers were penalized for having more children while single fathers were not. We find that gendered poverty and the gender pay gap narrowed between 1990 and 2000, but have stayed stable since. Overall, human capital decreases the gender income and poverty gap, but a substantial gap still remains. Implications for policy-makers are discussed.