Access: 485 paid full-text articles; 10 free.
By discipline: Management 43; Ethnology 1; Demography 41; Collected works 2; Theory and methodology 40; General 2; Sociology 284; Statistics 82.
By year: 2023: 2; 2020: 8; 2019: 17; 2018: 16; 2017: 17; 2016: 15; 2015: 13; 2014: 9; 2013: 76; 2012: 9; 2011: 20; 2010: 12; 2009: 20; 2008: 7; 2007: 14; 2006: 17; 2005: 10; 2004: 10; 2003: 9; 2002: 17; 2001: 14; 2000: 19; 1999: 10; 1998: 14; 1997: 14; 1996: 14; 1995: 3; 1994: 5; 1993: 6; 1992: 13; 1991: 6; 1990: 8; 1987: 7; 1986: 3; 1985: 1; 1984: 4; 1983: 11; 1982: 3; 1981: 1; 1980: 1; 1979: 3; 1978: 2; 1977: 3; 1975: 3; 1973: 2; 1970: 1; 1969: 1; 1967: 1; 1966: 1; 1964: 1.
A total of 495 search results (search time: 15 ms).
1.
With its roots in American pragmatism, symbolic interactionism has created a distinctive perspective and produced numerous important contributions and now offers significant prospects for the future. In this article, I review my intellectual journey with this perspective over forty years. This journey was initiated within the American society, sociology, and symbolic interaction of circa 1960. I note many of the contributions made by interactionists since that time, with particular focus on those who have contributed to the study of social organization and social process. I offer an agenda for the future based on currently underdeveloped areas that have potential. These are inequality orders, institutional analysis, collective action across space and time, and the integration of temporal and spatial orders. The article concludes with calls for further efforts at cross-perspective dialogues, more attention to feminist scholars, and an elaborated critical pragmatism.
2.
3.
Contamination of a sampled distribution, for example by a heavy-tailed distribution, can degrade the performance of a statistical estimator. We suggest a general approach to alleviating this problem, using a version of the weighted bootstrap. The idea is to 'tilt' away from the contaminated distribution by a given (but arbitrary) amount, in a direction that minimizes a measure of the new distribution's dispersion. This theoretical proposal has a simple empirical version, which results in each data value being assigned a weight according to an assessment of its influence on dispersion. Importantly, distance can be measured directly in terms of the likely level of contamination, without reference to an empirical measure of scale. This makes the procedure particularly attractive for use in multivariate problems. It has several forms, depending on the definitions taken for dispersion and for distance between distributions. Examples of dispersion measures include variance and generalizations based on high order moments. Practicable measures of the distance between distributions may be based on power divergence, which includes Hellinger and Kullback–Leibler distances. The resulting location estimator has a smooth, redescending influence curve and appears to avoid computational difficulties that are typically associated with redescending estimators. Its breakdown point can be located at any desired value ε ∈ (0, ½) simply by 'trimming' to a known distance (depending only on ε and the choice of distance measure) from the empirical distribution. The estimator has an affine equivariant multivariate form. Further, the general method is applicable to a range of statistical problems, including regression.
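As an illustration only (not code from the paper), the following is a minimal Python sketch of one plausible reading of the tilting idea: choose weights on the data that minimize a weighted variance subject to a bound on the Kullback–Leibler distance from the uniform weights 1/n. The function name, the choice of KL distance as the divergence, and the tuning parameter `rho` are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

def tilted_location(x, rho=0.05):
    """Hypothetical sketch: tilt away from the empirical distribution by reweighting,
    choosing weights that minimize the weighted variance subject to a Kullback-Leibler
    distance of at most rho from the uniform weights 1/n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    p0 = np.full(n, 1.0 / n)

    def dispersion(p):                   # weighted variance of the tilted distribution
        mu = np.dot(p, x)
        return np.dot(p, (x - mu) ** 2)

    def kl_from_uniform(p):              # KL(p || uniform) = sum_i p_i log(n p_i)
        return np.sum(p * np.log(np.maximum(n * p, 1e-12)))

    cons = [{"type": "eq",   "fun": lambda p: p.sum() - 1.0},
            {"type": "ineq", "fun": lambda p: rho - kl_from_uniform(p)}]
    res = minimize(dispersion, p0, bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    p = res.x
    return np.dot(p, x), p               # tilted location estimate and the weights
```

In this reading, observations whose down-weighting most reduces the weighted variance receive the smallest weights, which is the sense in which each data value is weighted according to its influence on dispersion.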
4.
Methods are suggested for improving the coverage accuracy of intervals for predicting future values of a random variable drawn from a sampled distribution. It is shown that properties of solutions to such problems may be quite unexpected. For example, the bootstrap and the jackknife perform very poorly when used to calibrate coverage, although the jackknife estimator of the true coverage is virtually unbiased. A version of the smoothed bootstrap can be employed for successful calibration, however. Interpolation among adjacent order statistics can also be an effective way of calibrating, although even there the results are unexpected. In particular, whereas the coverage error can be reduced from O(n^{-1}) to orders O(n^{-2}) and O(n^{-3}) (where n denotes the sample size) by interpolating among two and three order statistics respectively, the next two orders of reduction require interpolation among five and eight order statistics respectively.
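For illustration only, here is a minimal Python sketch of the simplest version of the idea: a one-sided upper prediction bound formed by linear interpolation between two adjacent order statistics at the position (n + 1)(1 − α). The calibrated, higher-order constructions discussed in the abstract go well beyond this sketch; the function name is hypothetical.

```python
import numpy as np

def upper_prediction_bound(x, alpha=0.05):
    """Hypothetical sketch: one-sided upper prediction bound for a future observation,
    obtained by interpolating between the order statistics adjacent to the position
    (n + 1) * (1 - alpha)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    pos = (n + 1) * (1 - alpha)          # target position among 1..n
    k = int(np.floor(pos))
    if k < 1:
        return x[0]
    if k >= n:
        return x[-1]                     # cannot extrapolate beyond the largest value
    frac = pos - k
    return (1 - frac) * x[k - 1] + frac * x[k]   # interpolate between X_(k) and X_(k+1)
```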
5.
Stein's method is used to prove the Lindeberg-Feller theorem and a generalization of the Berry-Esséen theorem. The arguments involve only manipulation of probability inequalities, and form an attractive alternative to the less direct Fourier-analytic methods which are traditionally employed.
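As background only (not reproduced from the paper), the central identities of Stein's method for the normal distribution can be written as follows; f_x denotes the solution of the Stein equation for a fixed x.

```latex
% Stein's characterisation of the standard normal distribution:
\[
  \mathbb{E}\bigl[f'(W) - W f(W)\bigr] = 0
  \quad \text{for all absolutely continuous } f \text{ with } \mathbb{E}\lvert f'(W)\rvert < \infty
  \quad \Longleftrightarrow \quad W \sim N(0,1).
\]
% For a fixed x, the Stein equation
\[
  f_x'(w) - w f_x(w) = \mathbf{1}\{w \le x\} - \Phi(x)
\]
% converts a bound on |E[f_x'(W) - W f_x(W)]| into a bound on |P(W \le x) - \Phi(x)|,
% the quantity controlled in Berry-Esseen-type results.
```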
6.
We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small sample performance for this package of methodology. We also develop theory establishing statistical consistency.
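The following is a heavily simplified Python sketch of the deconvolution step only, under assumptions not made in the paper: two or more replicates per cluster, a symmetric error distribution, no covariates or link function, and a crude sinc-kernel inversion. The function name, the grid sizes, and the ridge constant 1e-3 are all illustrative choices.

```python
import numpy as np

def deconvolve_cluster_mean_density(Y, x_grid, h):
    """Hypothetical sketch: recover the density of cluster means X from replicated
    observations Y[i, j] = X[i] + U[i, j], estimating the error characteristic
    function from within-cluster differences of replicates."""
    n, m = Y.shape
    ybar = Y.mean(axis=1)                    # cluster averages: X_i plus averaged error
    d = Y[:, 0] - Y[:, 1]                    # within-cluster differences: U_i1 - U_i2

    t = np.linspace(-1.0 / h, 1.0 / h, 801)  # frequency grid; cut-off 1/h acts as a sinc kernel
    dt = t[1] - t[0]

    # |phi_U(s)|^2 is the characteristic function of U_i1 - U_i2; with symmetric
    # errors phi_U is real and non-negative, so a square root recovers it.
    def phi_U(s):
        return np.sqrt(np.maximum(np.cos(np.outer(s, d)).mean(axis=1), 1e-3))

    phi_Ubar = phi_U(t / m) ** m             # CF of the average of m i.i.d. errors
    phi_Ybar = np.exp(1j * np.outer(t, ybar)).mean(axis=1)  # empirical CF of cluster averages
    phi_X = phi_Ybar / phi_Ubar              # deconvolution in the Fourier domain

    # Fourier inversion on the grid of x values
    integrand = np.exp(-1j * np.outer(x_grid, t)) * phi_X[None, :]
    f = integrand.real.sum(axis=1) * dt / (2.0 * np.pi)
    return np.maximum(f, 0.0)
```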
7.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
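As an illustration only, here is a minimal Python sketch of bagging a constrained estimator: the hard constraint is applied within each bootstrap resample and the constrained estimates are averaged. The function name and the choice of 500 resamples are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def bagged_constrained_estimate(x, estimator, theta1, n_boot=500, rng=None):
    """Hypothetical sketch: bootstrap aggregation (bagging) of an estimator subject
    to the constraint theta > theta1.  Averaging the constrained estimates over
    resamples exceeds theta1 strictly unless essentially every resample violates the
    constraint, and responds smoothly to perturbations of the data."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    n = len(x)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        sample = x[rng.integers(0, n, size=n)]          # resample with replacement
        estimates[b] = max(estimator(sample), theta1)   # apply the hard constraint
    return estimates.mean()                             # bagged (averaged) estimate

# Example (hypothetical): a variance-like parameter constrained to be positive.
# data = np.random.default_rng(0).normal(size=30)
# est = bagged_constrained_estimate(data, np.var, theta1=0.0)
```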
8.
Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite-order approximation and use that as the basis for confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application employed to calibrate a basic percentile-method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably, and, partly because of the relatively small amount of information contained in a dependent process, are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contradistinction to the case of the block bootstrap, the percentile-t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second-order correct.
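For illustration, a minimal Python sketch of the sieve (autoregressive) bootstrap follows: an AR(p) approximation is fitted by least squares, its centred residuals are resampled, and bootstrap series are rebuilt from the fitted recursion. The order p is treated as given, and the paper's double-bootstrap calibration and percentile-t constructions are not shown; the function name and burn-in length are illustrative.

```python
import numpy as np

def sieve_bootstrap_series(y, p, n_boot=200, rng=None):
    """Hypothetical sketch of the sieve (autoregressive) bootstrap for a stationary
    linear series: fit AR(p) by least squares, resample centred residuals, and feed
    them through the fitted recursion to generate bootstrap series."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()
    z = y - ybar

    # Least-squares fit of z_t = a_1 z_{t-1} + ... + a_p z_{t-p} + e_t
    X = np.column_stack([z[p - k - 1: n - k - 1] for k in range(p)])
    target = z[p:]
    a, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ a
    resid = resid - resid.mean()                       # centre the residuals

    series = []
    for _ in range(n_boot):
        e = rng.choice(resid, size=n + 50, replace=True)   # resampled innovations
        zb = np.zeros(n + 50)
        for t in range(p, n + 50):
            zb[t] = a @ zb[t - p:t][::-1] + e[t]           # AR(p) recursion
        series.append(zb[-n:] + ybar)                      # drop burn-in, restore mean
    return np.array(series)
```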
9.
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable, in terms of coverage error, from theoretical (infinite-simulation) double-bootstrap confidence intervals may be obtained at substantially less computational expense than with the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
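As a rough illustration of the setting rather than of the interpolation scheme analysed in the paper, the sketch below calibrates a one-sided percentile bound with a double bootstrap; np.quantile's linear interpolation between order statistics stands in for the interpolation idea, and the function name and simulation sizes B1, B2 are hypothetical.

```python
import numpy as np

def calibrated_upper_bound(x, estimator, alpha=0.05, B1=500, B2=200, rng=None):
    """Hypothetical sketch of a one-sided percentile bound calibrated by the double
    bootstrap: for each first-level resample, record where the original estimate falls
    within that resample's second-level bootstrap distribution, then take a quantile of
    those positions as the calibrated nominal level."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = estimator(x)

    first_level = np.empty(B1)
    u = np.empty(B1)
    for b in range(B1):
        xb = x[rng.integers(0, n, size=n)]               # first-level resample
        first_level[b] = estimator(xb)
        second = np.array([estimator(xb[rng.integers(0, n, size=n)])
                           for _ in range(B2)])          # second-level resamples
        u[b] = np.mean(second <= theta_hat)              # position of original estimate

    lam = np.quantile(u, 1 - alpha)                      # calibrated nominal level
    return np.quantile(first_level, lam)                 # calibrated percentile bound

# Example (hypothetical): upper confidence bound for a mean.
# data = np.random.default_rng(1).exponential(size=40)
# ub = calibrated_upper_bound(data, np.mean)
```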
10.
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.