Full-text access type
Paid full text | 11283 articles |
Free | 21 articles |
Subject classification
Management | 1650 articles |
Ethnology | 106 articles |
Demography | 2503 articles |
Collected works and series | 10 articles |
Theory and methodology | 560 articles |
General | 315 articles |
Sociology | 4840 articles |
Statistics | 1320 articles |
Publication year
2023 | 5 articles |
2022 | 2 articles |
2021 | 12 articles |
2020 | 24 articles |
2019 | 30 articles |
2018 | 1684 articles |
2017 | 1689 articles |
2016 | 1110 articles |
2015 | 52 articles |
2014 | 62 articles |
2013 | 195 articles |
2012 | 352 articles |
2011 | 1181 articles |
2010 | 1080 articles |
2009 | 815 articles |
2008 | 850 articles |
2007 | 1022 articles |
2006 | 26 articles |
2005 | 251 articles |
2004 | 275 articles |
2003 | 226 articles |
2002 | 116 articles |
2001 | 18 articles |
2000 | 25 articles |
1999 | 24 articles |
1998 | 12 articles |
1997 | 10 articles |
1996 | 39 articles |
1995 | 6 articles |
1994 | 13 articles |
1993 | 6 articles |
1992 | 6 articles |
1991 | 5 articles |
1990 | 7 articles |
1989 | 5 articles |
1988 | 15 articles |
1987 | 5 articles |
1986 | 5 articles |
1985 | 5 articles |
1984 | 4 articles |
1983 | 10 articles |
1982 | 4 articles |
1981 | 5 articles |
1980 | 2 articles |
1979 | 3 articles |
1978 | 2 articles |
1975 | 2 articles |
1971 | 1 article |
1970 | 1 article |
1969 | 2 articles |
Sort order: 10,000 results found (search time: 15 ms)
131.
132.
Gray George M., Allen Jon C., Burmaster David E., Gage Stuart H., Hammitt James K., Kaplan Stanley, Keeney Ralph L., Morse Joseph G., North D. Warner, Nyrop Jan P., Stahevitch Alina, Williams Richard 《Risk analysis》1998,18(6):773-780
The North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT) have focused attention on risk assessment of potential insect, weed, and animal pests and diseases of livestock. These risks have traditionally been addressed through quarantine protocols ranging from limits on the geographical areas from which a product may originate, through postharvest disinfestation procedures such as fumigation and inspections at points of export and import, to outright bans. To ensure that plant and animal protection measures are not used as nontariff trade barriers, GATT and NAFTA require pest risk analysis (PRA) to support quarantine decisions. The increased emphasis on PRA has spurred multiple efforts at the national and international level to design frameworks for the conduct of these analyses. As approaches to pest risk analysis proliferate, and the importance of the analyses grows, concerns have arisen about the scientific and technical conduct of pest risk analysis. In January 1997, the Harvard Center for Risk Analysis (HCRA) held an invitation-only workshop in Washington, D.C. to bring experts in risk analysis and pest characterization together to develop general principles for pest risk analysis. Workshop participants examined current frameworks for PRA, discussed the strengths and weaknesses of these approaches, and formulated principles based on years of experience with risk analysis in other settings and knowledge of the issues specific to the analysis of pests. The principles developed highlight both the similarities of pest risk analysis to other forms of risk analysis and its unique attributes.
133.
Jan Bartlema 《Revue européenne de démographie》1988,4(3):197-221
A combined macro-micro model is applied to a population similar to that forecast for 2035 in the Netherlands in order to simulate the effect on kinship networks of a mating system of serial monogamy. The importance of incorporating a parameter for the degree of concentration of childbearing over the female population is emphasized. The inputs to the model are vectors of fertility rates by age of mother and by age of father, a matrix of first-marriage rates by age of both partners (used in the macro-analytical expressions), and two parameters, H and S (used in the micro-simulation phase). The output is a data base of hypothetical individuals, whose records contain an identification number, age, sex, and the identification numbers of their relatives.
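The data base of hypothetical individuals described above maps naturally onto a simple record structure. The sketch below is a minimal Python illustration; the specific relative fields and helper function are assumptions for illustration only, not Bartlema's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Individual:
    """One hypothetical person in the simulated kinship data base.

    The abstract only states that each record holds an identification number,
    age, sex, and the identification numbers of relatives; the concrete
    relative fields below are illustrative assumptions.
    """
    person_id: int
    age: int
    sex: str                               # e.g. "F" or "M"
    mother_id: Optional[int] = None
    father_id: Optional[int] = None
    partner_id: Optional[int] = None
    child_ids: List[int] = field(default_factory=list)

def children_of(population: Dict[int, Individual], person_id: int) -> List[Individual]:
    """Follow the stored identification numbers to retrieve a person's children."""
    return [population[c] for c in population[person_id].child_ids]

# Minimal usage with two hypothetical records.
pop = {
    1: Individual(person_id=1, age=62, sex="F", child_ids=[2]),
    2: Individual(person_id=2, age=30, sex="M", mother_id=1),
}
print([c.person_id for c in children_of(pop, 1)])    # -> [2]
```

Storing only identification numbers of relatives (rather than nested objects) keeps kinship links symmetric and lets arbitrary network queries be built on top of the flat record file, which is what makes the output usable as a data base.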
134.
Ole Klungsøyr, Joe Sexton, Inger Sandanger, Jan F. Nygård 《Journal of applied statistics》2013,40(4):843-861
A substantial degree of uncertainty surrounds the reconstruction of events based on memory recall. This form of measurement error affects the performance of structured interviews such as the Composite International Diagnostic Interview (CIDI), an important tool for assessing mental health in the community. Measurement error probably explains the discrepancy in estimates between longitudinal studies with repeated assessments (the gold standard), which yield approximately constant rates of depression, and cross-sectional studies, which often find increasing rates closer in time to the interview. Repeated assessments of current status (or recent history) are more reliable than reconstruction of a person's psychiatric history from a single interview. In this paper, we demonstrate a method of estimating a time-varying measurement error distribution in the age of onset of an initial depressive episode, as diagnosed by the CIDI, based on an assumption regarding age-specific incidence rates. High-dimensional non-parametric estimation is achieved by the EM algorithm with smoothing. The method is applied to data from a Norwegian mental health survey in 2000. The measurement error distribution changes dramatically from 1980 to 2000, with increasing variance and greater bias further away in time from the interview. Some influence of the measurement error on already published results is found.
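To give a rough feel for the estimation idea, the sketch below runs a toy deconvolution EM with smoothing on a discrete age grid: the distribution of true onset ages is assumed known (standing in for the paper's assumption about age-specific incidence rates), the reported age is true age plus a recall error, and the error distribution is re-estimated in each M-step and then smoothed. All grids, distributions, and the smoothing kernel are illustrative assumptions; this is not the authors' time-varying estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete grids (illustrative assumptions, not the survey's actual ages).
ages = np.arange(10, 51)                     # possible true ages of onset
errors = np.arange(-10, 11)                  # possible recall errors (years)

# Assumed-known distribution of true onset ages, standing in for the
# assumption about age-specific incidence rates.
f_true = np.exp(-0.05 * (ages - 20))
f_true /= f_true.sum()

# Simulate reported ages: true age plus a recall error drawn from a
# distribution g_star that the analyst does not get to see.
g_star = np.exp(-0.5 * ((errors + 2) / 3.0) ** 2)
g_star /= g_star.sum()
n = 2000
t = rng.choice(ages, size=n, p=f_true)
e = rng.choice(errors, size=n, p=g_star)
reported = t + e

def f_of(age_values):
    """Probability of each candidate true age under the assumed onset distribution (0 off the grid)."""
    out = np.zeros(len(age_values))
    mask = (age_values >= ages[0]) & (age_values <= ages[-1])
    out[mask] = f_true[age_values[mask] - ages[0]]
    return out

# EM with smoothing for the recall-error distribution g.
g = np.full(len(errors), 1.0 / len(errors))          # start from a uniform guess
kernel = np.array([0.25, 0.5, 0.25])                 # simple smoothing kernel
for _ in range(200):
    # E-step: posterior over the recall error for each reported age,
    # P(e | r) proportional to g(e) * f(r - e).
    post = g[None, :] * f_of((reported[:, None] - errors[None, :]).ravel()).reshape(n, -1)
    post /= post.sum(axis=1, keepdims=True)
    # M-step: average the posteriors, then smooth and renormalise.
    g = post.mean(axis=0)
    g = np.convolve(g, kernel, mode="same")
    g /= g.sum()

print(np.round(g, 3))      # should roughly track g_star
```

The smoothing step inside the M-step is what keeps the high-dimensional non-parametric estimate stable; without it, the EM iterations tend to produce a very spiky error distribution.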
135.
The paper investigates random processes of geometrical objects in Euclidean spaces. General properties of the measure of total projections are derived by means of the Palm distribution. Explicit formulas for the variances of the projection measure are obtained for Poisson point processes of compact sets. Intensity estimators of fibre (surface) processes are then studied by means of projection measures. A classification of direct and indirect probes is introduced. The indirect sampling design of vertical sections and projections is generalized and its statistical properties are derived.
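For orientation, the classical relation behind projection-based intensity estimation can be written down for the isotropic case; this is the textbook Cauchy-type formula under an isotropy assumption, not the paper's more general variance results or its vertical-design generalization.

```latex
% Cauchy's formula for an isotropic fibre process in R^3: projecting the
% fibres contained in a slab of thickness t orthogonally onto its base
% plane, and counting overlaps with multiplicity (the total projection),
% the expected projected length per unit area is
\mathbb{E}\!\left[\text{total projected length per unit area}\right]
  = \frac{\pi}{4}\, L_V\, t ,
% where L_V is the length intensity (fibre length per unit volume),
% which suggests the estimator
\widehat{L}_V = \frac{4}{\pi\, t}\cdot
  \frac{\text{observed total projected length}}{\text{area of the projection window}} .
```

When isotropy cannot be assumed, designs such as the vertical sections and projections studied in the paper replace the constant factor by an integration over controlled probe orientations.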
136.
Alessandro Di Bucchianico, Jan Friso Groote, Kees Van Hee, Ronald Kruidhof 《Communications in Statistics - Simulation and Computation》2013,42(2):346-359
Common software release procedures based on statistical techniques try to optimize the trade-off between further testing costs and the costs due to remaining errors. We propose new software release procedures whose aim is to certify, with a given confidence level, that the software does not contain errors. The underlying model is a discrete-time model similar to the geometric Moranda model. The decisions are based on a mix of classical and Bayesian approaches to sequential testing and do not require any assumption on the initial number of errors.
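A simple way to see the flavour of such certification rules (not the authors' Bayesian sequential procedure) is the classical frequentist bound: if any remaining error would be detected by a single random test with probability at least p, then after k consecutive failure-free tests an error would have shown up with probability at least 1 − (1 − p)^k. The sketch below computes the number of failure-free tests required under that assumption.

```python
import math

def tests_needed(p_detect: float, confidence: float) -> int:
    """Smallest number of consecutive failure-free tests k such that an error
    with per-test detection probability >= p_detect would have been found
    with probability >= confidence, i.e. (1 - p_detect)**k <= 1 - confidence.
    """
    if not (0 < p_detect < 1 and 0 < confidence < 1):
        raise ValueError("p_detect and confidence must lie strictly between 0 and 1")
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

# Example: errors assumed detectable with probability 0.01 per test,
# certification sought at the 95% confidence level.
print(tests_needed(0.01, 0.95))   # -> 299 failure-free tests
```

The attraction of the certification viewpoint is visible even in this crude bound: the stopping rule is phrased in terms of a confidence statement about the absence of errors rather than an estimate of how many errors remain.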
137.
In this paper a specification strategy is proposed for determining the orders of ARMA models. The strategy is based on two newly defined concepts: the q-conditioned partial autoregressive function and the p-conditioned partial moving average function. These concepts are similar to the generalized partial autocorrelation function, which has recently been suggested for order determination. The main difference is that they are defined and employed in connection with an asymptotically efficient estimation method instead of the rather inefficient generalized Yule-Walker method. The specification is performed using sequential Wald-type tests. In contrast to traditional hypothesis testing, these tests use critical values that increase with the sample size at an appropriate rate.
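The sequential-testing idea can be illustrated with an ordinary Wald statistic on the highest-order coefficient of a candidate ARMA fit, compared against a critical value that grows with the sample size. The sketch below uses statsmodels and a log(n) threshold purely as an illustration; it is not the paper's q-conditioned partial autoregressive function or its exact testing sequence.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Simulate an ARMA(1,1) series (illustrative data-generating process).
np.random.seed(1)
proc = ArmaProcess(ar=[1.0, -0.6], ma=[1.0, 0.4])   # (1 - 0.6B) y_t = (1 + 0.4B) e_t
y = pd.Series(proc.generate_sample(nsample=1000))

def wald_highest_ar(series: pd.Series, p: int, q: int) -> float:
    """Wald statistic for the highest-order AR coefficient in an ARMA(p, q) fit."""
    res = ARIMA(series, order=(p, 0, q)).fit()
    return float(res.tvalues[f"ar.L{p}"] ** 2)

# Ask whether the AR order can be reduced from 2 to 1 (with q fixed at 1),
# using a critical value that increases with the sample size; log(n) is one
# conventional choice for such consistent order-selection rules.
n = len(y)
stat = wald_highest_ar(y, p=2, q=1)
critical = np.log(n)
print(f"Wald statistic = {stat:.2f}, critical value = {critical:.2f}")
print("drop the AR(2) term" if stat < critical else "keep the AR(2) term")
```

Letting the critical value grow with n is what distinguishes this style of sequential specification from fixed-level hypothesis testing: the probability of overfitting the order shrinks as more data arrive.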
138.
The most important factor in kernel regression is the choice of bandwidth. Considerable attention has been paid to extending the idea of an iterative method, known from kernel density estimation, to kernel regression. Data-driven selectors of the bandwidth for kernel regression are considered. The proposed method is based on an optimally balanced relation between the integrated variance and the integrated squared bias. This approach leads to an iterative, quadratically convergent process. The analysis of statistical properties shows the rationale of the proposed method; in particular, its consistency is established. The utility of the method is illustrated through a simulation study and real data applications.
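For comparison with such data-driven selectors, the sketch below implements the Nadaraya-Watson estimator with a simple leave-one-out cross-validation bandwidth search on toy data. It is a common baseline selector, not the iterative variance-bias-balancing procedure proposed in the paper.

```python
import numpy as np

def nw_fit(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def loo_cv_bandwidth(x, y, grid):
    """Pick the bandwidth minimising the leave-one-out prediction error."""
    best_h, best_score = None, np.inf
    for h in grid:
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(w, 0.0)                  # leave each point out of its own fit
        pred = (w @ y) / w.sum(axis=1)
        score = np.mean((y - pred) ** 2)
        if score < best_score:
            best_h, best_score = h, score
    return best_h

# Toy data: noisy sine curve.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + 0.3 * rng.standard_normal(200)

h = loo_cv_bandwidth(x, y, grid=np.linspace(0.05, 1.0, 40))
fitted = nw_fit(x, y, x, h)
print(f"selected bandwidth: {h:.3f}")
```

An iterative selector of the kind described in the abstract replaces the grid search by a fixed-point update of h, which is where the claimed quadratic convergence comes from.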
139.
We propose two non-parametric portmanteau test statistics for serial dependence in high dimensions using the correlation integral. One test depends on a cutoff threshold value, while the other is free of this dependence. Although these tests may each be viewed as variants of the classical Brock, Dechert, and Scheinkman (BDS) test statistic, they avoid some of the major weaknesses of that test. We establish consistency and asymptotic normality of both portmanteau tests. Using Monte Carlo simulations, we investigate the small-sample properties of the tests for a variety of data-generating processes with normally and uniformly distributed innovations. We show that asymptotic theory provides accurate inference in finite samples and for relatively high dimensions. This is followed by a power comparison with the BDS test and with several rank-based extensions of the BDS test that have recently been proposed in the literature. Two real data examples are provided to illustrate the use of the test procedures.
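The correlation integral that such tests build on is straightforward to compute directly. The sketch below evaluates the raw BDS-type contrast C_m(ε) − C_1(ε)^m for an i.i.d. series, without the variance normalisation that the asymptotic theory supplies; the threshold rule of thumb is an assumption for illustration.

```python
import numpy as np

def correlation_integral(x, m, eps):
    """Fraction of pairs of m-dimensional delay-embedding vectors within sup-distance eps."""
    n = len(x) - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])            # m-histories of the series
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2) # pairwise sup-norm distances
    iu = np.triu_indices(n, k=1)
    return np.mean(dist[iu] < eps)

def bds_contrast(x, m, eps):
    """Raw BDS-type quantity C_m(eps) - C_1(eps)**m; it tends to 0 for i.i.d. data."""
    return correlation_integral(x, m, eps) - correlation_integral(x, 1, eps) ** m

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
eps = 0.7 * np.std(x)            # a common rule-of-thumb choice of threshold
print(bds_contrast(x, m=2, eps=eps))   # close to 0 for independent data
```

The dependence of this quantity on the cutoff ε is exactly the nuisance that the second test statistic in the paper is designed to remove.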
140.
Several important economic time series are recorded on a particular day every week. Seasonal adjustment of such series is difficult because the number of weeks varies between 52 and 53 and the position of the recording day changes from year to year. In addition, certain festivals, most notably Easter, take place at different times depending on the year. This article presents a solution to problems of this kind by setting up a structural time series model that allows the seasonal pattern to evolve over time and enables trend extraction and seasonal adjustment to be carried out by means of state-space filtering and smoothing algorithms. The method is illustrated with a Bank of England series on the money supply.
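The general modelling approach described here (trend plus a stochastic, time-varying seasonal, estimated by state-space filtering and smoothing) can be sketched with the unobserved-components machinery in statsmodels. The example below uses simulated weekly data and a plain period-52 seasonal; it is not the Bank of England series and does not reproduce the article's handling of 53-week years or moving festivals.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulate a weekly series: trend + seasonal pattern + noise (illustrative only).
rng = np.random.default_rng(0)
n = 5 * 52
t = np.arange(n)
y = 100 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, n)
y = pd.Series(y, index=pd.date_range("2015-01-02", periods=n, freq="W-FRI"))

# Local linear trend with a stochastic (time-varying) seasonal of period 52,
# estimated by the Kalman filter and smoother.
model = sm.tsa.UnobservedComponents(
    y,
    level="local linear trend",
    seasonal=52,
    stochastic_seasonal=True,
)
res = model.fit(disp=False)

trend = res.level.smoothed          # smoothed trend component
seasonal = res.seasonal.smoothed    # smoothed, slowly evolving seasonal component
adjusted = y - seasonal             # seasonally adjusted series
print(res.summary().tables[0])
```

Because the seasonal component is stochastic, its shape is allowed to drift from year to year, which is the key ingredient the article exploits when the recording day and the calendar shift.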