91.
The authors review the trends in the use of computers in the delivery and support of career guidance and counseling identified at the symposium International Perspectives on Career Development. The papers presented emphasized that 20th‐century computer‐based systems continue to be used, mainly delivered via the World Wide Web. These systems are enhanced through audio, video, graphics, strategies to provide needs assessment, and support by cybercounselors or expert system design. The papers also revealed a new trend: the use of elegant Web sites to store and search immense libraries of resources needed by professionals and clients and to facilitate communication and collaboration among professionals in cyberspace. Concerns, issues, and resources related to many areas, including the readiness of clients to use computer‐based systems, were also raised; existing sources of guidelines are noted.
92.
Philip L. H. Yu, K. F. Lam, S. M. Lo 《Journal of the Royal Statistical Society. Series A (Statistics in Society)》2005,168(3):583-597
Summary. Factor analysis is a powerful tool to identify the common characteristics among a set of variables that are measured on a continuous scale. In the context of factor analysis for non-continuous-type data, most applications are restricted to item response data only. We extend the factor model to accommodate ranked data. The Monte Carlo expectation–maximization algorithm is used for parameter estimation, in which the E-step is implemented via the Gibbs sampler. An analysis based on both complete and incomplete ranked data (e.g. rank the top q out of k items) is considered. Estimation of the factor scores is also discussed. The method proposed is applied to analyse a set of incomplete ranked data that were obtained from a survey that was carried out in Guangzhou, a major city in mainland China, to investigate the factors affecting people's attitude towards choosing jobs.
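A minimal sketch of the latent-utility view that underlies factor models for ranked data: each judge orders items by continuous latent utilities generated from a one-factor model, and an incomplete top-q ranking keeps only the q best items. The loadings, sample size, and function names below are illustrative assumptions, not values or code from the paper.

```python
import random

def rank_top_q(loadings, q, rng):
    """Draw one judge's top-q ranking of len(loadings) items under a
    one-factor latent-utility model: u_j = loadings[j]*f + noise_j."""
    f = rng.gauss(0.0, 1.0)                          # common factor score
    utilities = [lam * f + rng.gauss(0.0, 1.0) for lam in loadings]
    order = sorted(range(len(loadings)), key=lambda j: -utilities[j])
    return order[:q]                                 # incomplete ranking

rng = random.Random(0)
loadings = [1.2, 0.8, 0.3, -0.5, -1.0]               # hypothetical loadings
rankings = [rank_top_q(loadings, 3, rng) for _ in range(1000)]
```

In the MCEM fit described above, these latent utilities are roughly the quantities a Gibbs sampler would draw in the E-step, conditional on the observed rankings.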
93.
Point processes are the stochastic models most suitable for describing physical phenomena that appear at irregularly spaced times, such as earthquakes. These processes are uniquely characterized by their conditional intensity, that is, by the probability that an event will occur in the infinitesimal interval (t, t+Δt), given the history of the process up to t. The seismic phenomenon displays different behaviours on different time and size scales; in particular, the occurrence of destructive shocks over some centuries in a seismogenic region may be explained by the elastic rebound theory. This theory has inspired the so-called stress release models: their conditional intensity translates the idea that an earthquake produces a sudden decrease in the amount of strain accumulated gradually over time along a fault, and the subsequent event occurs when the stress exceeds the strength of the medium. This study has a double objective: the formulation of these models in the Bayesian framework, and the assignment to each event of a mark, that is its magnitude, modelled through a distribution that depends at time t on the stress level accumulated up to that instant. The resulting parameter space is constrained and dependent on the data, complicating Bayesian computation and analysis. We have resorted to Monte Carlo methods to solve these problems.
94.
Christian P. Robert, Xiao-Li Meng, Jesper Møller, Jeffrey S. Rosenthal, C. Jennison, M. A. Hurn, F. Al-Awadhi, Peter McCullagh, Christophe Andrieu, Arnaud Doucet, Petros Dellaportas, Ioulia Papageorgiou, Ricardo S. Ehlers, Elena A. Erosheva, Stephen E. Fienberg, Jonathan J. Forster, Roger C. Gill, Nial Friel, Peter Green, David Hastie, R. King, Hans R. Künsch, N. A. Lazar, C. Osinski 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2003,65(1):39-55
95.
Silphids in urban forests: Diversity and function
Many ecologists have examined the process of how urbanization reduces biological diversity but rarely have its ecological consequences been assessed. We studied forest-dwelling burying beetles (Coleoptera: Silphidae)—a guild of insects that requires carrion to complete their life cycles—along an urban-rural gradient of land use in Maryland. Our objective was to determine how forest fragmentation associated with urbanization affects (1) beetle community diversity and structure and (2) the ecological function provided by these insects, that is, decomposition of vertebrate carcasses. Forest fragmentation strongly reduced burying beetle diversity and abundance, and did so far more pervasively than urbanization of the surrounding landscape. The likelihood that beetles interred experimental baits was a direct, positive function of burying beetle diversity. We conclude that loss of burying beetle diversity resulting from forest fragmentation could have important ecological consequences in urban forests.
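Community diversity in studies like this is typically summarized with an index such as Shannon's H′. A small sketch with hypothetical species counts (not data from the study):

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical burying-beetle counts at an intact vs a fragmented site.
intact = [30, 25, 20, 15, 10]      # five species, fairly even
fragmented = [70, 5, 3, 2]         # fewer species, one dominant
```

H′ rises with both species richness and evenness, which is why a fragmented, dominance-skewed community scores lower even at similar total abundance.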
96.
Cathy W. S. Chen, F. C. Liu, Mike K. P. So 《Australian & New Zealand Journal of Statistics》2008,50(1):29-51
To capture mean and variance asymmetries and time‐varying volatility in financial time series, we generalize the threshold stochastic volatility (THSV) model and incorporate a heavy‐tailed error distribution. Unlike existing stochastic volatility models, this model simultaneously accounts for uncertainty in the unobserved threshold value and in the time‐delay parameter. Self‐exciting and exogenous threshold variables are considered to investigate the impact of a number of market news variables on volatility changes. Adopting a Bayesian approach, we use Markov chain Monte Carlo methods to estimate all unknown parameters and latent variables. A simulation experiment demonstrates good estimation performance for reasonable sample sizes. In a study of two international financial market indices, we consider two variants of the generalized THSV model, with US market news as the threshold variable. Finally, we compare models using Bayesian forecasting in a value‐at‐risk (VaR) study. The results show that our proposed model can generate more accurate VaR forecasts than can standard models.
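The data-generating process of a self-exciting THSV model can be sketched as follows: log-volatility follows one of two AR(1) regimes chosen by comparing the return d steps back with a threshold r, and returns have Student-t errors for heavy tails. This is a generic simulation under assumed parameter values, not the authors' estimator (which treats r and d as unknown).

```python
import math
import random

def simulate_thsv(n, alpha, beta, tau, r, d, nu, rng):
    """Simulate a self-exciting threshold SV model: log-volatility h_t
    follows regime-specific AR(1) dynamics chosen by comparing the
    return d steps back with threshold r; returns are Student-t(nu)."""
    h = 0.0
    y, hs = [0.0] * n, []
    for t in range(n):
        regime = 0 if t < d or y[t - d] <= r else 1
        h = alpha[regime] + beta[regime] * h + tau * rng.gauss(0.0, 1.0)
        # Student-t(nu) draw: standard normal over sqrt(chi-square/nu)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
        eps = rng.gauss(0.0, 1.0) / math.sqrt(chi2 / nu)
        y[t] = math.exp(h / 2.0) * eps
        hs.append(h)
    return y, hs

rng = random.Random(2)
y, h = simulate_thsv(500, alpha=(-0.2, 0.1), beta=(0.9, 0.85),
                     tau=0.3, r=0.0, d=1, nu=5, rng=rng)
```

With a self-exciting threshold (as here, the lagged return itself), negative news shifts the volatility equation into a different regime, which is how the model captures variance asymmetry.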
97.
Michael S. Rendall, Ryan Admiraal, Alessandra DeRose, Paola DiGiulio, Mark S. Handcock, Filomena Racioppi 《Statistical Methods and Applications》2008,17(4):519-539
In non-experimental research, data on the same population process may be collected simultaneously by more than one instrument. For example, in the present application, two sample surveys and a population birth registration system all collect observations on first births by age and year, while the two surveys additionally collect information on women’s education. To make maximum use of the three data sources, the survey data are pooled and the population data introduced as constraints in a logistic regression equation. Reductions in standard errors about the age and birth-cohort parameters of the regression equation in the order of three-quarters are obtained by introducing the population data as constraints. A halving of the standard errors about the education parameters is achieved by pooling observations from the larger survey dataset with those from the smaller survey. The percentage reduction in the standard errors through imposing population constraints is independent of the total survey sample size.
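The sample-size intuition behind the reported halving of standard errors can be shown with an intercept-only logistic regression, where the standard error of the estimated log-odds is 1/sqrt(n·p(1−p)) from the Fisher information. The counts below are hypothetical, and this sketch does not implement the paper's population-constrained estimator.

```python
import math

def logit_se(successes, n):
    """Standard error of the log-odds in an intercept-only logistic
    regression, from the Fisher information n*p*(1-p)."""
    p = successes / n
    return 1.0 / math.sqrt(n * p * (1.0 - p))

# Hypothetical first-birth counts from a smaller and a larger survey.
se_small = logit_se(120, 400)                 # smaller survey alone
se_pooled = logit_se(120 + 900, 400 + 3000)   # both surveys pooled
```

Since the information grows linearly in n, pooling a survey roughly 7.5 times larger shrinks this standard error by about a factor of 1/sqrt(8.5), consistent in spirit with the gains the abstract reports for the education parameters.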
98.
99.
We examine how attention to animacy information may contribute to children's developing knowledge of language. This research extends beyond prior research in that children were shown dynamic events with novel entities, and were asked not only to comprehend sentences but to use sentence structure to infer the meaning of a new word. In a 4 × 3 design, animacy status (e.g., animate agent, inanimate patient) and labeling syntax (agent, patient, nonlabel control) were varied. Across most events, 2 1/2‐year‐old participants responded as if they expected animate entities to be named. However, in a prototypical (animate agent‐inanimate patient) event condition, children responded differentially across different syntactic structures. Thus, the clearest evidence for attention to syntactic cues was found in the prototypical event condition. These results suggest that young children attend to the animacy status of unfamiliar entities, that they have expectations about animacy relations in events, and that these expectations support emerging syntactic knowledge.
100.
Stuart Barber, Guy P. Nason 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(4):927-939
Summary. Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.
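The transform–shrink–invert pipeline described in the summary can be illustrated with the simplest real-valued wavelet, the Haar wavelet. This toy sketch is not the complex-valued or multiwavelet shrinkage the paper proposes, and the signal below is made up.

```python
def haar_dwt(x):
    """Full Haar DWT of a length-2^J signal: returns the scaling
    coefficient and the detail-coefficient levels, coarse to fine."""
    s, details = list(x), []
    while len(s) > 1:
        details.append([(s[i] - s[i + 1]) / 2 ** 0.5
                        for i in range(0, len(s), 2)])
        s = [(s[i] + s[i + 1]) / 2 ** 0.5 for i in range(0, len(s), 2)]
    details.reverse()
    return s[0], details

def haar_idwt(scaling, details):
    """Invert haar_dwt (the transform is orthonormal)."""
    s = [scaling]
    for level in details:
        nxt = []
        for a, d in zip(s, level):
            nxt.extend([(a + d) / 2 ** 0.5, (a - d) / 2 ** 0.5])
        s = nxt
    return s

def hard_threshold(details, lam):
    """Zero every detail coefficient with magnitude <= lam."""
    return [[d if abs(d) > lam else 0.0 for d in level] for level in details]

x = [1.0, 2.0, 4.0, 3.0, 0.0, -1.0, 2.0, 5.0]
scaling, details = haar_dwt(x)
smooth = haar_idwt(scaling, hard_threshold(details, 100.0))
```

In practice the threshold is data-driven (e.g. the universal threshold σ·sqrt(2·log n)); here it is set absurdly high only to show that killing every detail coefficient returns the sample mean at each point.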