21.
ABSTRACT

We propose two nonparametric portmanteau test statistics for serial dependence in high dimensions using the correlation integral. One test depends on a cutoff threshold value, while the other is free of this dependence. Although these tests may each be viewed as variants of the classical Brock, Dechert, and Scheinkman (BDS) test statistic, they avoid some of the major weaknesses of that test. We establish consistency and asymptotic normality of both portmanteau tests. Using Monte Carlo simulations, we investigate the small-sample properties of the tests for a variety of data-generating processes with normally and uniformly distributed innovations. We show that asymptotic theory provides accurate inference in finite samples and for relatively high dimensions. This is followed by a power comparison with the BDS test and with several rank-based extensions of the BDS test that have recently been proposed in the literature. Two real data examples are provided to illustrate the use of the test procedure.
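The proposed statistics, like the BDS test, are built on the sample correlation integral of a delay-embedded series. The sketch below is not the authors' new tests; it is a minimal illustration of the correlation integral itself, with the embedding dimension m and threshold eps chosen arbitrarily for the example.

```python
import numpy as np

def correlation_integral(x, m=2, eps=0.5):
    """Sample correlation integral C_m(eps): the fraction of pairs of
    m-dimensional delay vectors lying within sup-norm distance eps."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    # Delay-embedding vectors (x_t, x_{t+1}, ..., x_{t+m-1}).
    emb = np.column_stack([x[i:i + n] for i in range(m)])
    # Pairwise sup-norm distances between embedded vectors.
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    iu = np.triu_indices(n, k=1)
    return np.mean(dist[iu] < eps)

rng = np.random.default_rng(0)
series = rng.standard_normal(500)
# The BDS-type idea compares C_m(eps) with C_1(eps)**m under independence.
print(correlation_integral(series, m=2, eps=0.5), correlation_integral(series, m=1, eps=0.5) ** 2)
```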
22.
ABSTRACT

The most important factor in kernel regression is the choice of bandwidth. Considerable attention has been paid to extending the idea of an iterative method, known for kernel density estimation, to kernel regression. Data-driven selectors of the bandwidth for kernel regression are considered. The proposed method is based on an optimally balanced relation between the integrated variance and the integrated squared bias, and it leads to an iterative, quadratically convergent process. The analysis of statistical properties shows the rationale of the proposed method; in particular, its consistency is established. The utility of the method is illustrated through a simulation study and real data applications.
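The paper's selector iterates on a plug-in balance between integrated variance and integrated squared bias; those details are not reproduced here. For orientation only, the sketch below implements a standard alternative selector, leave-one-out cross-validation for a Nadaraya-Watson estimator with a Gaussian kernel; the candidate bandwidth grid and the toy data are arbitrary choices.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate at points x0 with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((np.asarray(x0)[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loo_cv_bandwidth(x, y, grid):
    """Pick the bandwidth minimizing the leave-one-out squared prediction error."""
    scores = []
    for h in grid:
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(w, 0.0)            # leave each point out of its own fit
        fit = (w @ y) / w.sum(axis=1)
        scores.append(np.mean((y - fit) ** 2))
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
h = loo_cv_bandwidth(x, y, np.linspace(0.01, 0.3, 30))
print("CV bandwidth:", h, "fit at 0.5:", nw_estimate([0.5], x, y, h))
```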
23.
In this paper a specification strategy is proposed for the determination of the orders in ARMA models. The strategy is based on two newly defined concepts: the q-conditioned partial autoregressive function and the p-conditioned partial moving average function. These concepts are similar to the generalized partial autocorrelation function, which has recently been suggested for order determination. The main difference is that they are defined and employed in connection with an asymptotically efficient estimation method instead of the rather inefficient generalized Yule-Walker method. The specification is performed using sequential Wald-type tests. In contrast to traditional hypothesis testing, these tests use critical values that increase with the sample size at an appropriate rate.
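The sketch below illustrates only the general flavour of sequential Wald-type order tests, not the paper's p-conditioned and q-conditioned functions: for a few candidate orders it fits an ARMA model by maximum likelihood and computes the squared t-ratio of the highest-order AR and MA coefficients, to be compared against a critical value that grows with the sample size. The growth rate used here (2 log n) is a placeholder assumption, not the rate derived in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

def highest_order_wald(y, p, q):
    """Squared t-ratios (Wald statistics) of the highest-lag AR and MA coefficients
    in an ARMA(p, q) fit."""
    res = ARIMA(y, order=(p, 0, q)).fit()
    t = res.params / res.bse
    wald = {}
    if p > 0:
        wald["ar"] = float(t[f"ar.L{p}"] ** 2)
    if q > 0:
        wald["ma"] = float(t[f"ma.L{q}"] ** 2)
    return wald

np.random.seed(2)
y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.4], nsample=400)  # ARMA(1,1) data
crit = 2.0 * np.log(len(y))   # placeholder critical value growing with n
for p, q in [(1, 1), (2, 1), (1, 2)]:
    print((p, q), highest_order_wald(y, p, q), "critical value:", round(crit, 2))
```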
24.
Let X1, X2, …, Xn be a sample of independent, identically distributed (i.i.d.) random variables having an unknown absolutely continuous distribution function F with density f. The twofold aim of this paper is, firstly, to derive asymptotic expressions for the mean integrated squared error (MISE) of a kernel estimator of F when f is either assumed to be continuous everywhere or …; secondly, the problem of finding optimal kernels in these two cases is studied in detail.
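A kernel estimator of a distribution function F replaces the indicator in the empirical distribution function by a smoothed version, i.e. it averages the integrated kernel evaluated at (x - X_i)/h. The sketch below uses the Gaussian kernel, whose integral is the standard normal CDF; the rule-of-thumb bandwidth is a placeholder assumption, not the optimal choice studied in the paper.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x0, sample, h=None):
    """Smoothed estimate of F(x0): average of Phi((x0 - X_i)/h) over the sample,
    where Phi is the standard normal CDF (the integrated Gaussian kernel)."""
    sample = np.asarray(sample, dtype=float)
    if h is None:
        # Rough rule-of-thumb bandwidth (illustrative placeholder only).
        h = 1.06 * sample.std(ddof=1) * len(sample) ** (-1 / 5)
    x0 = np.atleast_1d(x0)
    return norm.cdf((x0[:, None] - sample[None, :]) / h).mean(axis=1)

rng = np.random.default_rng(3)
data = rng.standard_normal(300)
print(kernel_cdf([-1.0, 0.0, 1.0], data))   # compare with Phi(-1), Phi(0), Phi(1)
```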
25.
Several important economic time series are recorded on a particular day every week. Seasonal adjustment of such series is difficult because the number of weeks varies between 52 and 53 and the position of the recording day changes from year to year. In addition, certain festivals, most notably Easter, take place at different times according to the year. This article presents a solution to problems of this kind by setting up a structural time series model that allows the seasonal pattern to evolve over time and enables trend extraction and seasonal adjustment to be carried out by means of state-space filtering and smoothing algorithms. The method is illustrated with a Bank of England series on the money supply.
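The weekly model with a time-varying seasonal and moving-festival effects is specific to the article. Purely as a generic illustration of the state-space approach, the sketch below fits a basic structural model (local linear trend plus a stochastic seasonal of fixed period) with statsmodels and extracts the smoothed components; the simulated monthly series and period of 12 are assumptions for the example only.

```python
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

# Toy monthly series: trend + seasonal + noise.
rng = np.random.default_rng(4)
n = 120
y = (10 + 0.05 * np.arange(n)
     + 1.5 * np.sin(2 * np.pi * np.arange(n) / 12)
     + 0.4 * rng.standard_normal(n))

# Basic structural model: local linear trend + stochastic seasonal of period 12.
model = UnobservedComponents(y, level="local linear trend", seasonal=12)
res = model.fit(disp=False)

trend_hat = res.level["smoothed"]        # smoothed trend component
seas_hat = res.seasonal["smoothed"]      # smoothed seasonal component
adjusted = y - seas_hat                  # seasonally adjusted series
print("trend (first 5):", np.round(trend_hat[:5], 2))
print("adjusted (first 5):", np.round(adjusted[:5], 2))
```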
26.
27.
We compare and investigate Neyman's smooth test, its components, and the Kolmogorov-Smirnov (KS) goodness-of-fit test for testing the uniformity of multivariate forecast densities. Simulations indicate that the KS test lacks power when the forecast distributions are misspecified, especially for correlated sequences of random variables. Neyman's smooth test and its components work well in samples of the size typically available, although size distortions sometimes occur. The components provide directed diagnosis regarding the kind of departure from the null. For illustration, the tests are applied to forecast densities obtained from a bivariate threshold model fitted to high-frequency financial data.
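For probability integral transform (PIT) values u_1, …, u_n that should be i.i.d. uniform on [0, 1] under a correct forecast density, Neyman's smooth test aggregates squared normalized Legendre-polynomial components, each asymptotically chi-square(1), while the KS test can be run directly with scipy. The sketch below uses the first four components, an illustrative choice, and ignores the serial-correlation refinements the paper investigates.

```python
import numpy as np
from scipy.stats import chi2, kstest

def neyman_smooth_components(u):
    """Components u_j^2 of Neyman's smooth test of uniformity on [0, 1], based on
    normalized shifted Legendre polynomials; each is asymptotically chi-square(1)
    under the null, and their sum is asymptotically chi-square(k)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    pi = [
        np.sqrt(3) * (2 * u - 1),
        np.sqrt(5) * (6 * u**2 - 6 * u + 1),
        np.sqrt(7) * (20 * u**3 - 30 * u**2 + 12 * u - 1),
        3 * (70 * u**4 - 140 * u**3 + 90 * u**2 - 20 * u + 1),
    ]
    return np.array([(p.sum() / np.sqrt(n)) ** 2 for p in pi])

rng = np.random.default_rng(5)
u = rng.uniform(size=500)                      # PIT values; uniform under H0
comps = neyman_smooth_components(u)
print("components:", comps.round(3))
print("smooth test p-value:", 1 - chi2.cdf(comps.sum(), df=len(comps)))
print("KS test p-value:", kstest(u, "uniform").pvalue)
```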
28.
The North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT) have focused attention on risk assessment of potential insect, weed, and animal pests and diseases of livestock. These risks have traditionally been addressed through quarantine protocols ranging from limits on the geographical areas from which a product may originate, postharvest disinfestation procedures like fumigation, and inspections at points of export and import, to outright bans. To ensure that plant and animal protection measures are not used as nontariff trade barriers, GATT and NAFTA require pest risk analysis (PRA) to support quarantine decisions. The increased emphasis on PRA has spurred multiple efforts at the national and international level to design frameworks for the conduct of these analyses. As approaches to pest risk analysis proliferate, and the importance of the analyses grows, concerns have arisen about the scientific and technical conduct of pest risk analysis. In January of 1997, the Harvard Center for Risk Analysis (HCRA) held an invitation-only workshop in Washington, D.C. to bring experts in risk analysis and pest characterization together to develop general principles for pest risk analysis. Workshop participants examined current frameworks for PRA, discussed strengths and weaknesses of the approaches, and formulated principles based on years of experience with risk analysis in other settings and knowledge of the issues specific to the analysis of pests. The principles developed highlight both the similarities of pest risk analysis to other forms of risk analysis and its unique attributes.
29.
We consider the situation in which a buying organization deals with a discrete quantity discount schedule offered by a selling organization. Furthermore, the buying organization can negotiate with the selling organization about the lot size and purchase price, but does not know the underlying function that was used by the selling organization to determine the quantity discount schedule. In this paper, we provide an analytical and empirical basis for one general quantity discount function (QDF) that can be used to describe the underlying function of almost all different quantity discount types. We first develop such a QDF analytically. Among other things, this QDF enables buying organizations to calculate detailed prices for a large number of quantities. We subsequently show that the QDF fits very well with 66 discount schedules found in practice. We discuss how the QDF and related indicators can be a useful tool in supplier selection and negotiation processes. It can also be used for competitive analyses, multiple sourcing decisions, and allocating savings for purchasing groups. Additionally, the QDF can be included in research models incorporating quantity discounts. We conclude the paper with an outlook on further QDF research regarding the characterization of commodity markets from a demand elasticity point of view.
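The article's general QDF is not reproduced here. Purely for illustration of the idea of fitting a smooth price-quantity curve to a discrete discount schedule, the sketch below fits a hypothetical power-law curve p(q) = a * q^(-b) to a made-up schedule with scipy and reads off an interpolated price; both the functional form and the data are assumptions, not the paper's QDF or its empirical schedules.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical discount schedule: minimum order quantity -> unit price.
quantities = np.array([10, 50, 100, 500, 1000], dtype=float)
unit_prices = np.array([9.50, 8.70, 8.10, 7.00, 6.60])

def power_qdf(q, a, b):
    """Illustrative power-law discount curve p(q) = a * q**(-b)."""
    return a * q ** (-b)

params, _ = curve_fit(power_qdf, quantities, unit_prices, p0=(10.0, 0.1))
a, b = params
print(f"fitted curve: p(q) = {a:.2f} * q^(-{b:.3f})")
print("interpolated unit price at q = 250:", round(power_qdf(250.0, a, b), 2))
```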
30.
This article considers the analysis of complex monitored health data, where one or several signals reflecting the current health status can be represented by a finite number of states, in addition to a set of covariates. In particular, we consider a novel application of a non-parametric state intensity regression method in order to study time-dependent effects of covariates on the state transition intensities. The method can handle baseline, time-varying, and dynamic covariates. Because of its non-parametric nature, the method can handle different data types and challenges under minimal assumptions. If the signal reflecting the current health status is of a continuous nature, we propose the application of a weighted median and a hysteresis filter as data pre-processing steps in order to facilitate robust analysis. In intensity regression, covariates can be aggregated by a suitable functional form over a time history window. We propose to study the estimated cumulative regression parameters for different choices of the time history window in order to investigate short- and long-term effects of the given covariates. The proposed framework is discussed and applied to resuscitation data of newborns collected in Tanzania.
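The pre-processing step for a continuous health-status signal combines a weighted median smoother with a hysteresis filter. The sketch below gives a minimal version of both ideas; the window length, weights, and switching thresholds are illustrative assumptions, not those used on the resuscitation data.

```python
import numpy as np

def weighted_median_filter(x, weights):
    """Sliding weighted median: in each window, take the value at which the
    cumulative weight (in sorted order) first reaches half the total weight."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(weights, dtype=float)
    k = len(w) // 2
    padded = np.pad(x, k, mode="edge")
    out = np.empty_like(x)
    for i in range(len(x)):
        window = padded[i:i + len(w)]
        order = np.argsort(window)
        cum = np.cumsum(w[order])
        out[i] = window[order][np.searchsorted(cum, cum[-1] / 2.0)]
    return out

def hysteresis_states(x, low, high):
    """Map a continuous signal to binary states 0/1, switching to 1 only when the
    signal exceeds `high` and back to 0 only when it falls below `low`."""
    states = np.zeros(len(x), dtype=int)
    state = 0
    for i, v in enumerate(x):
        if state == 0 and v > high:
            state = 1
        elif state == 1 and v < low:
            state = 0
        states[i] = state
    return states

rng = np.random.default_rng(6)
signal = np.sin(np.linspace(0, 6 * np.pi, 300)) + 0.3 * rng.standard_normal(300)
smooth = weighted_median_filter(signal, weights=[1, 2, 3, 2, 1])
print(hysteresis_states(smooth, low=-0.2, high=0.2)[:20])
```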