131.
Mosteller F, Youtz C, Zahn D. Demography, 1967, 4(2): 850-858
When percentages are computed for counts in several categories, or for several positive measurements each taken as a fraction of their sum, the rounded percentages often fail to add to 100 percent. We investigate how frequently this failure occurs and what the distributions of sums of rounded percentages are for (1) an empirical set of data, (2) the multinomial distribution in small samples, (3) spacings between points dropped on an interval (the broken-stick model), and (4) simulations for several categories. The several methods produce similar distributions. We find that the probability that the sum of rounded percentages adds to exactly 100 percent is certain for two categories, about three-fourths for three categories, about two-thirds for four categories, and about [Formula: see text] for larger numbers of categories, c, on the average when categories are not improbable.
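A minimal Monte Carlo sketch of the broken-stick case described in this abstract; the trial count and seed are illustrative choices, not from the paper:

```python
import random

def rounded_percentage_sum(counts):
    """Round each category's share to the nearest whole percent and sum."""
    total = sum(counts)
    return sum(round(100 * c / total) for c in counts)

def prob_sum_is_100(n_categories, trials=20000, seed=1):
    """Monte Carlo estimate of P(rounded percentages add to exactly 100)
    under the broken-stick model: spacings between uniform points."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        cuts = sorted(rng.random() for _ in range(n_categories - 1))
        pieces = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        if rounded_percentage_sum(pieces) == 100:
            hits += 1
    return hits / trials

for c in (2, 3, 4, 6):
    print(c, prob_sum_is_100(c))
```

For two categories the rounded shares always sum to 100 (each rounding error cancels the other), and for three categories the simulated probability comes out near the three-fourths reported in the abstract.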
132.
We develop a new approach to assessing the value of home production time based on willingness to spend time and money to obtain environmental improvements. When people's choices are constrained by time as well as money, measures of willingness to pay can be defined with respect to either numeraire. In a model that explicitly allows for multiple shadow values of time, we show that the willingness-to-pay time and money measures are linked through the value of saving time. With survey information on people's willingness to spend additional time on housework activities, as well as to pay money, to obtain environmental quality improvements, joint estimation within a utility-consistent structure produces estimates of both willingness to pay and the value of saving housework time. From the value of saving housework time, the marginal value of housework time can be readily identified. When applied to Korean households' valuation of water quality improvements in the Man Kyoung River, we find that the value of housework time is 70–80% of the market wage.
133.
We analyzed qualitative data gathered at a selective urban university with a large black student body. We found that black students from integrated backgrounds welcomed the chance to establish friendships with same-race peers even though they were at ease in white settings, whereas students from segregated backgrounds saw same-race peers as a source of comfort and refuge from a white world often perceived as hostile. These contrasting perceptions set up both groups for shock upon matriculation. Students from an integrated background were better prepared academically and socially, but were unfamiliar with urban black culture and uncomfortable interacting with students of lower class standing. Students from a segregated background were surprised to find they had little in common with more affluent students from integrated backgrounds. Although both groups were attracted to campus for the same reason, to interact with a critical mass of same-race peers, their contrasting expectations produced a letdown as the realities of intraracial diversity set in.
134.
Origins of the New Latino Underclass   (total citations: 1; self-citations: 0; citations by others: 1)
Over the past four decades, the Latino population of the United States was transformed from a small, ethnically segmented population of Mexicans in the southwest, Puerto Ricans in New York, and Cubans in Miami into a large national population dominated by Mexicans, Central Americans, and South Americans. This transformation occurred through mass immigration, much of it undocumented, to the point where large fractions of non-Caribbean Hispanics lack legal protections and rights in the United States. Rising illegality is critical to understanding the disadvantaged status of Latinos today. The unauthorized population began to grow after avenues for legal entry were curtailed in 1965. The consequent rise in undocumented migration enabled political and bureaucratic entrepreneurs to frame Latino migration as a grave threat to the nation, leading to a rising frequency of negative framings in the media, a growing conservative reaction, and increasingly restrictive immigration and border policies that generated more apprehensions. Rising apprehensions, in turn, further inflamed the conservative reaction to produce even harsher enforcement and still more apprehensions, yielding a self-feeding cycle in which apprehensions kept rising even though undocumented inflows had stabilized. The consequent militarization of the border had the perverse effect of reducing rates of out-migration rather than inhibiting in-migration, leading to a sharp rise in net undocumented migration and rapid growth of the undocumented population. As a result, a majority of Mexican, Central American, and South American immigrants are presently undocumented at a time when unauthorized migrants are subject to increasing sanctions from authorities and the public, yielding downward pressure on the status and well-being of Latinos in the United States.
135.
This paper proposes the singly truncated normal distribution as a model for estimating radiance measurements from satellite-borne infrared sensors. These measurements are made in order to estimate sea surface temperatures, which can be related to radiances. Maximum likelihood estimation is used to provide estimates for the unknown parameters. In particular, a procedure is described for estimating clear radiances in the presence of clouds, and the Kolmogorov-Smirnov statistic is used to test goodness-of-fit of the measurements to the singly truncated normal distribution. Tables of quantile values of the Kolmogorov-Smirnov statistic for several values of the truncation point are generated from Monte Carlo experiments. Finally, a numerical example using synthetic data is presented to illustrate the application of the procedures.
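A sketch of the maximum likelihood step for a normal distribution singly truncated below at a known point; the truncation point, sample size, and parameter values here are illustrative, not the paper's satellite data or code:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import truncnorm

def fit_truncated_normal(x, a):
    """MLE of (mu, sigma) for a normal truncated below at `a`,
    given observations x with x >= a."""
    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)          # keeps sigma positive
        alpha = (a - mu) / sigma           # standardized truncation point
        return -truncnorm.logpdf(x, alpha, np.inf, loc=mu, scale=sigma).sum()
    res = minimize(nll, x0=[x.mean(), np.log(x.std())], method="Nelder-Mead")
    return res.x[0], float(np.exp(res.x[1]))

# Illustrative "radiance" data: N(10, 2^2) observations truncated below at 9,
# as if low readings were removed by a cloud screen
rng = np.random.default_rng(0)
full = rng.normal(10.0, 2.0, size=20000)
obs = full[full >= 9.0]
mu_hat, sigma_hat = fit_truncated_normal(obs, 9.0)
print(mu_hat, sigma_hat)
```

The key point is that the naive sample mean and standard deviation of the truncated observations are biased, while the truncated-likelihood fit recovers the untruncated parameters.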
136.
This article tests the Fourier flexible form on quarterly U.S. monetary data. The data have been prescreened for consistency with the general axiom of revealed preference, and subindexes are formed using the Divisia approach. The global Fourier model fits well, although there is a potential problem of overfitting and certain data points exhibit behavior inconsistent with the model. The elasticities vary over time, particularly around business-cycle troughs. It appears that financial asset demand surfaces are highly nonlinear, and the many unsuccessful existing attempts to estimate money demand may not have worked well for this reason.
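A toy least-squares illustration of the Fourier flexible form idea (a quadratic part augmented with sine/cosine pairs); the data-generating function, rescaling, and number of trigonometric pairs are invented for illustration and have nothing to do with the article's monetary data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical nonlinear relationship standing in for an asset-demand surface
x = np.linspace(0.0, 1.0, 200)
y = np.exp(-3.0 * x) + rng.normal(0.0, 0.01, size=x.size)

# Map x into [0, 2*pi) so the trigonometric terms complete full periods
# over the sample range, as the Fourier flexible form construction requires
s = 2.0 * np.pi * (x - x.min()) / (x.max() - x.min() + 1e-9)

def fit_rmse(A, y):
    """Least-squares fit of y on columns of A; return in-sample RMSE."""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ beta - y) ** 2)))

# Quadratic (translog-like) part only
quad = np.column_stack([np.ones_like(x), x, x ** 2])

# Fourier flexible form: quadratic part plus J sine/cosine pairs
J = 3
trig = [f(j * s) for j in range(1, J + 1) for f in (np.cos, np.sin)]
fff = np.column_stack([quad] + trig)

r_quad = fit_rmse(quad, y)
r_fff = fit_rmse(fff, y)
print(r_quad, r_fff)
```

Because the quadratic regressors are nested inside the Fourier flexible form, the in-sample fit can only improve as trigonometric pairs are added, which is also why the abstract flags overfitting as a concern.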
137.
We investigate methods for the design of sample surveys, and address the traditional resistance of survey samplers to the use of model-based methods by incorporating model robustness at the design stage. The designs are intended to be sufficiently flexible and robust that resulting estimates, based on the designer's best guess at an appropriate model, remain reasonably accurate in a neighbourhood of this central model. Thus, consider a finite population of N units in which a survey variable Y is related to a q-dimensional auxiliary variable x. We assume that the values of x are known for all N population units, and that we will select a sample of n ≤ N population units and then observe the n corresponding values of Y. The objective is to predict the population total $T=\sum_{i=1}^{N}Y_{i}$. The design problem which we consider is to specify a selection rule, using only the values of the auxiliary variable, to select the n units for the sample so that the predictor has optimal robustness properties. We suppose that T will be predicted by methods based on a linear relationship between Y (possibly transformed) and given functions of x. We maximise the mean squared error of the prediction of T over realistic neighbourhoods of the fitted linear relationship, and of the assumed variance and correlation structures. This maximised mean squared error is then minimised over the class of possible samples, yielding an optimally robust (‘minimax’) design. To carry out the minimisation step we introduce a genetic algorithm and discuss its tuning for maximal efficiency.
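A small sketch of the model-based prediction step that such a design protects, with simple random sampling standing in for the optimised minimax design; the population size, model coefficients, and noise level are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical finite population: auxiliary x known for all N units,
# Y related linearly to x (the designer's "central" working model)
N, n = 1000, 100
x = rng.uniform(1.0, 10.0, size=N)
Y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=N)

# Select n of the N units (simple random here; the paper optimises this choice)
sample = rng.choice(N, size=n, replace=False)
mask = np.zeros(N, dtype=bool)
mask[sample] = True

# Fit the working linear model on the sampled units only
X = np.column_stack([np.ones(n), x[mask]])
beta = np.linalg.lstsq(X, Y[mask], rcond=None)[0]

# Model-based predictor of the total T: observed Y's plus model
# predictions for the N - n unsampled units
T_hat = Y[mask].sum() + (beta[0] + beta[1] * x[~mask]).sum()
print(T_hat, Y.sum())
```

The minimax design in the paper chooses which n units to sample so that this predictor stays accurate even when the fitted linear relationship and variance structure are only approximately right.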
138.
For Canada's boreal forest region, accurate modelling of the timing of the appearance of aspen leaves is important to forest fire management, as it signifies the end of the spring fire season that occurs after snowmelt. This article compares two methods, a midpoint rule and a conditional expectation method, used to estimate the true flush date for interval-censored data from a large set of fire-weather stations in Alberta, Canada. The conditional expectation method uses the interval-censored kernel density estimator of Braun, Duchesne, and Stafford (2005). The methods are compared via simulation, where true flush dates were generated from a normal distribution and then converted into intervals by adding and subtracting exponential random variables. The simulation parameters were estimated from the data set and several scenarios were considered. The study reveals that the conditional expectation method is never worse than the midpoint method, and that there is a significant advantage to this method when the intervals are large. An illustration of the methodology applied to the Alberta data set is also provided.
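A simplified version of the simulation comparison described above. For the conditional expectation step, the known normal density stands in for the kernel density estimate, and the flush-date mean, spread, and censoring rates are invented for illustration:

```python
import math
import random

random.seed(0)

MU, SIGMA = 135.0, 6.0   # hypothetical flush-date distribution (day of year)
N = 5000

def phi(z):   # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * math.erfc(-z / math.sqrt(2))

# True flush dates, censored into intervals by adding and subtracting
# exponential random variables, as in the paper's simulation design
data = []
for _ in range(N):
    t = random.gauss(MU, SIGMA)
    left = t - random.expovariate(1 / 4.0)
    right = t + random.expovariate(1 / 4.0)
    data.append((t, left, right))

def midpoint(left, right):
    return (left + right) / 2

def cond_exp(left, right, mu=MU, sigma=SIGMA):
    """E[T | left < T < right] for T ~ N(mu, sigma^2): the mean of a
    truncated normal, a simplified stand-in for the kernel-based method."""
    a, b = (left - mu) / sigma, (right - mu) / sigma
    denom = Phi(b) - Phi(a)
    return mu + sigma * (phi(a) - phi(b)) / denom

mse_mid = sum((midpoint(l, r) - t) ** 2 for t, l, r in data) / N
mse_ce = sum((cond_exp(l, r) - t) ** 2 for t, l, r in data) / N
print(mse_mid, mse_ce)
```

In this toy setup the conditional expectation estimator has lower mean squared error than the midpoint rule, mirroring the paper's finding that it is never worse and gains most when the intervals are wide.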
139.
140.
Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions of variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multilevel logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of the method used to estimate VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPCs in scientific data analysis.
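Two of the standard VPC definitions for a two-level logistic model can be sketched as follows; the intercept and level-2 variance used below are illustrative values, not estimates from the vegetation data:

```python
import math
import random

def vpc_latent(sigma2_u):
    """Latent-variable (threshold) definition: the level-1 residual
    variance of a logistic model is fixed at pi^2/3 on the underlying
    continuous scale."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3)

def vpc_simulation(beta0, sigma2_u, draws=200_000, seed=1):
    """Simulation definition: draw cluster effects, convert to
    probabilities, and partition the binomial variance across levels."""
    rng = random.Random(seed)
    sd = math.sqrt(sigma2_u)
    ps = [1 / (1 + math.exp(-(beta0 + rng.gauss(0, sd)))) for _ in range(draws)]
    mean_p = sum(ps) / draws
    v2 = sum((p - mean_p) ** 2 for p in ps) / draws   # between-cluster variance
    v1 = sum(p * (1 - p) for p in ps) / draws         # within-cluster variance
    return v2 / (v1 + v2)

print(vpc_latent(0.5), vpc_simulation(0.0, 0.5))
```

The two definitions answer slightly different questions (variance on the latent scale versus on the probability scale), which is why studies comparing them, like the one above, matter in practice.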