Search results: 677 in total; items 51-60 are listed below.
51.
In this paper, we propose a simple bias-reduced log-periodogram regression estimator, d̂_r, of the long-memory parameter, d, that eliminates the first- and higher-order biases of the Geweke and Porter-Hudak (1983) (GPH) estimator. The bias-reduced estimator is the same as the GPH estimator except that one includes frequencies to the power 2k, for k = 1, …, r and some positive integer r, as additional regressors in the pseudo-regression model that yields the GPH estimator. The reduction in bias is obtained using assumptions on the spectrum only in a neighborhood of the zero frequency. Following the work of Robinson (1995b) and Hurvich, Deo, and Brodsky (1998), we establish the asymptotic bias, variance, and mean-squared error (MSE) of d̂_r, determine the asymptotically MSE-optimal choice of the number of frequencies, m, to include in the regression, and establish the asymptotic normality of d̂_r. These results show that the bias of d̂_r goes to zero at a faster rate than that of the GPH estimator when the normalized spectrum at zero is sufficiently smooth, while its variance is increased only by a multiplicative constant. We show that the bias-reduced estimator d̂_r attains the optimal rate of convergence for a class of spectral densities that includes those that are smooth of order s≥1 at zero when r≥(s−2)/2 and m is chosen appropriately. For s>2, the GPH estimator does not attain this rate. The proof uses results of Giraitis, Robinson, and Samarov (1997). We specify a data-dependent plug-in method for selecting the number of frequencies m to minimize the asymptotic MSE for a given value of r. Monte Carlo simulation results for stationary Gaussian ARFIMA (1, d, 1) and (2, d, 0) models show that the bias-reduced estimators perform well relative to the standard log-periodogram regression estimator.
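As a rough illustration of the regression described above (not the authors' code), the sketch below regresses the log-periodogram on a constant, -2·log λ_j, and the extra regressors λ_j^(2k), k = 1, …, r, and reads off the coefficient on -2·log λ_j as the estimate of d. The function name, the demeaning step, and the white-noise demo are illustrative assumptions.

```python
import numpy as np

def gph_bias_reduced(x, m, r=1):
    """Bias-reduced log-periodogram regression sketch: regress log I(lambda_j)
    on [1, -2*log(lambda_j), lambda_j**2, ..., lambda_j**(2r)] for j = 1..m."""
    x = np.asarray(x, float)
    n = x.size
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n                        # Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = (np.abs(dft) ** 2) / (2.0 * np.pi * n)       # periodogram ordinates
    X = np.column_stack([np.ones(m), -2.0 * np.log(lam)] +
                        [lam ** (2 * k) for k in range(1, r + 1)])
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return coef[1]                                   # coefficient on -2*log(lambda): the estimate of d

# Demo on Gaussian white noise, for which d = 0, so the estimate should be near zero.
rng = np.random.default_rng(1)
print(gph_bias_reduced(rng.standard_normal(2000), m=200, r=1))
```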
52.
This paper proposes a new nested pseudo-likelihood (NPL) algorithm for the estimation of a class of discrete Markov decision models and studies its statistical and computational properties. Our method is based on a representation of the solution of the dynamic programming problem in the space of conditional choice probabilities. When the NPL algorithm is initialized with consistent nonparametric estimates of the conditional choice probabilities, successive iterations return a sequence of estimators of the structural parameters which we call K-stage policy iteration estimators. We show that the sequence includes as extreme cases a Hotz-Miller estimator (for K=1) and Rust's nested fixed point estimator (in the limit as K→∞). Furthermore, the asymptotic distribution of all the estimators in the sequence is the same and equal to that of the maximum likelihood estimator. We illustrate the performance of our method with several examples based on Rust's bus replacement model. Monte Carlo experiments reveal a trade-off between finite-sample precision and computational cost in the sequence of policy iteration estimators.
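To make the K-stage idea concrete, here is a toy sketch of policy iteration in the space of conditional choice probabilities on a small replacement-style model with logit errors. The state space, flow utilities, parameter values, and simulated data are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

EULER, S, beta = 0.5772156649, 10, 0.95

# Transitions: a=0 (keep) moves the state up one bin (capped); a=1 (replace) resets it to 0.
F0 = np.zeros((S, S)); F0[np.arange(S - 1), np.arange(1, S)] = 1.0; F0[-1, -1] = 1.0
F1 = np.zeros((S, S)); F1[:, 0] = 1.0

def flow_utility(theta):
    cost, RC = theta                                      # per-state maintenance cost, replacement cost
    return np.column_stack([-cost * np.arange(S), -RC * np.ones(S)])

def psi(P, theta):
    """One policy-iteration step in the space of conditional choice probabilities."""
    u = flow_utility(theta)
    e_P = (P * (u + EULER - np.log(P))).sum(axis=1)       # expected flow value under P
    F_P = P[:, [0]] * F0 + P[:, [1]] * F1                 # state transition induced by P
    V = np.linalg.solve(np.eye(S) - beta * F_P, e_P)      # ex-ante value function
    v = u + beta * np.column_stack([F0 @ V, F1 @ V])      # choice-specific values
    ev = np.exp(v - v.max(axis=1, keepdims=True))
    return ev / ev.sum(axis=1, keepdims=True)

def npl(states, actions, P0, K=3, theta0=(0.5, 3.0)):
    """K-stage estimators: maximize the pseudo-likelihood at each stage, then update the CCPs."""
    P, theta = P0, np.asarray(theta0, float)
    for _ in range(K):
        obj = lambda th: -np.log(psi(P, th)[states, actions]).sum()
        theta = minimize(obj, theta, method="Nelder-Mead").x
        P = psi(P, theta)
    return theta

# Simulate toy choice data from (approximately) the model's fixed-point choice probabilities.
rng = np.random.default_rng(0)
P_true = np.full((S, 2), 0.5)
for _ in range(200):
    P_true = psi(P_true, (0.4, 4.0))
states = rng.integers(0, S, size=2000)
actions = (rng.random(2000) < P_true[states, 1]).astype(int)

# Initialize with smoothed frequency estimates of the conditional choice probabilities.
counts = np.column_stack([np.bincount(states, weights=1 - actions, minlength=S),
                          np.bincount(states, weights=actions, minlength=S)])
P0 = (counts + 1.0) / (counts.sum(axis=1, keepdims=True) + 2.0)
print("estimated (cost, RC):", npl(states, actions, P0, K=3))
```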
53.
This paper studies the relation between discrete-time and continuous-time principal-agent models. We derive the continuous-time model as a limit of discrete-time models with ever shorter periods and show that optimal incentive schemes in the discrete-time models approximate the optimal incentive scheme in the continuous-time model, which is linear in accounts. Under the additional assumption that the principal observes only cumulative total profits at the end and that the agent can destroy profits unnoticed, an incentive scheme that is linear in total profits is shown to be approximately optimal in the discrete-time model when the period length is small.
54.
Beh and Farver recently investigated and evaluated three non-iterative procedures for estimating the linear-by-linear parameter of an ordinal log-linear model. Their study demonstrated that these non-iterative techniques provide estimates that are, for most types of contingency tables, statistically indistinguishable from estimates obtained with Newton's unidimensional algorithm. Here we show how two of these techniques are related via the Box-Cox transformation. We also show that, by using this transformation, accurate non-iterative estimates are achievable even when a contingency table contains sampling zeros.
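For reference, the Box-Cox family referred to above is the standard power transform; a minimal sketch follows. The specific way it links the two non-iterative estimators is developed in the paper and is not reproduced here.

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform: (x**lam - 1)/lam for lam != 0, with log(x) as the lam -> 0 limit."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

print(box_cox([1.0, 2.0, 4.0], lam=0.5))
print(box_cox([1.0, 2.0, 4.0], lam=0.0))   # the logarithmic limit
```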
55.
In this paper, we investigate the performance of different parametric and nonparametric approaches for analyzing overdispersed person-time-event rates in the clinical trial setting. We show that the likelihood-based parametric approach may not maintain the nominal test size for overdispersed person-time-event data. The nonparametric approaches may use as estimator either the mean of the per-subject ratios of the number of events to follow-up time, or the ratio of the mean number of events to the mean follow-up time across all subjects. Among these, the ratio of means is a consistent estimator and can be studied analytically. Asymptotic properties of all the estimators were studied through numerical simulations. This research shows that the nonparametric ratio-of-means estimator is to be recommended for analyzing overdispersed person-time data. When the sample size is small, resampling-based approaches can yield satisfactory results.
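A minimal numeric sketch of the two nonparametric estimators contrasted above, on made-up per-subject data; the numbers are purely illustrative.

```python
import numpy as np

# Toy per-subject data: event counts and follow-up times (purely illustrative).
events = np.array([0.0, 2.0, 1.0, 5.0, 0.0, 3.0])
followup = np.array([1.2, 0.8, 2.0, 1.5, 0.5, 1.0])   # e.g., person-years

mean_of_ratios = np.mean(events / followup)      # mean of the per-subject rates
ratio_of_means = events.sum() / followup.sum()   # pooled rate: the consistent ratio-of-means estimator

print(mean_of_ratios, ratio_of_means)
```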
56.
Summary: Wald statistics in generalized linear models are asymptotically χ² distributed. The asymptotic chi-squared law of the corresponding quadratic form, however, has drawbacks as an approximation to the finite-sample distribution. It is shown by means of a comprehensive simulation study that improvements can be achieved by applying simple finite-sample approximations to the distribution of the quadratic form in generalized linear models. These approximations are based on a χ² distribution with an estimated degrees-of-freedom parameter, generalizing an approach by Patnaik and Pearson. The simulation studies confirm that the nominal level is maintained more accurately than with the standard Wald statistic.
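The sketch below shows the generic two-moment (Patnaik-style) matching idea behind such approximations: represent the quadratic form Q as a·χ²(ν) with the scale and degrees of freedom chosen to match the mean and variance of Q. How the mean and variance are estimated, and the exact form used in the study, are not reproduced here.

```python
from scipy import stats

def two_moment_chi2_sf(q_obs, mean_q, var_q):
    """Approximate a quadratic form Q by a*chi2(nu), matching E[Q] = a*nu and Var[Q] = 2*a**2*nu,
    and return the approximate upper-tail p-value for an observed value q_obs."""
    a = var_q / (2.0 * mean_q)          # scale
    nu = 2.0 * mean_q ** 2 / var_q      # estimated degrees of freedom
    return stats.chi2.sf(q_obs / a, df=nu)

# Example: if Q has estimated mean 5.4 and variance 14.0, and q = 11.3 was observed:
print(two_moment_chi2_sf(11.3, mean_q=5.4, var_q=14.0))
```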
57.
Summary: This study develops, and puts forward for discussion, a concept for cumulating periodic household budget surveys within the project "Official Statistics and Socio-Economic Questions" (Amtliche Statistik und sozioökonomische Fragestellungen). We lay out the theoretical foundations and building blocks and solve the central task of structural demographic weighting with a grossing-up/calibration approach on an information-theoretic basis. Building on the household budget surveys of the Federal Statistical Office (the Continuous Household Budget Surveys (Laufende Wirtschaftsrechnungen) and the Income and Consumption Sample (EVS)), a concrete concept for cumulating yearly household budget surveys is then proposed. This achieves the goal of cumulating cross-sections into a more comprehensive cumulated sample for finely disaggregated analyses. Simulation studies to evaluate the concept are to follow.
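As a rough illustration of the weighting/calibration step mentioned above, here is a minimal raking (iterative proportional fitting) sketch, one standard minimum-cross-entropy calibration method. The variables, categories, and population totals are made up, and the study's actual information-theoretic approach may differ in its details.

```python
import numpy as np

def rake(weights, categories, margins, iters=100):
    """Adjust survey weights so that weighted category totals match known population margins."""
    w = np.asarray(weights, float).copy()
    for _ in range(iters):
        for var, target in margins.items():
            for level, total in target.items():
                mask = categories[var] == level
                current = w[mask].sum()
                if current > 0:
                    w[mask] *= total / current       # multiplicative update per category
    return w

# Toy sample of 6 households with design weights and two demographic variables.
weights = np.ones(6)
categories = {"region": np.array(["N", "N", "S", "S", "S", "N"]),
              "size":   np.array([1, 2, 1, 2, 2, 1])}
margins = {"region": {"N": 300.0, "S": 700.0},       # known population totals (made up)
           "size":   {1: 400.0, 2: 600.0}}
print(rake(weights, categories, margins))
```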
58.
Summary: The next German census will be an Administrative Record Census, in which data on persons from several administrative registers are merged. Because no unique identification number exists across the registers, object identification has to be applied. We present a two-step procedure. We briefly discuss questions such as the correctness and completeness of the Administrative Record Census and then focus on the object identification problem, which can be viewed as a special classification problem: pairs of records are to be classified as matched or not matched. To achieve computational efficiency, a preselection technique is applied to the pairs. Our approach is illustrated with a database containing a large set of consumer addresses. (This work was partially supported by the Berlin-Brandenburg Graduate School in Distributed Information Systems, DFG grant no. GRK 316. The authors thank Michael Fürnrohr for previewing the paper and an anonymous reviewer for helpful comments.)
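A minimal sketch of pair preselection (blocking) followed by a simple match classification; the records, blocking key, similarity measure, and threshold are all illustrative assumptions rather than the procedure used for the census data.

```python
import itertools
from difflib import SequenceMatcher

records_a = [{"id": "A1", "name": "Anna Schmidt", "zip": "10115"},
             {"id": "A2", "name": "Jonas Meier",  "zip": "80331"}]
records_b = [{"id": "B7", "name": "Ana Schmidt",  "zip": "10115"},
             {"id": "B9", "name": "J. Meier",     "zip": "80331"}]

def block(records, key):
    out = {}
    for rec in records:
        out.setdefault(rec[key], []).append(rec)
    return out

def name_similarity(a, b):
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()

# Preselection: only pairs sharing the blocking key (here: zip code) are compared at all.
blocks_a, blocks_b = block(records_a, "zip"), block(records_b, "zip")
matches = []
for key in set(blocks_a) & set(blocks_b):
    for a, b in itertools.product(blocks_a[key], blocks_b[key]):
        if name_similarity(a, b) >= 0.75:     # classify the pair as "matched" above an arbitrary threshold
            matches.append((a["id"], b["id"]))
print(matches)
```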
59.
When estimating the distributions of two random variables, X and Y, investigators often have prior information that Y tends to be bigger than X. To formalize this prior belief, one could potentially assume stochastic ordering between X and Y, which implies Pr(X ≤ z) ≥ Pr(Y ≤ z) for all z in the domain of X and Y. Stochastic ordering is quite restrictive, though, and this article focuses instead on Bayesian estimation of the distribution functions of X and Y under the weaker stochastic precedence constraint, Pr(X ≤ Y) ≥ 0.5. We consider the case where both X and Y are categorical variables with common support and develop a Gibbs sampling algorithm for posterior computation. The method is then generalized to the case where X and Y are survival times. The proposed approach is illustrated using data on survival after tumor removal for patients with malignant melanoma.
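The paper develops a Gibbs sampler for this constrained posterior. Purely as an illustration of the constraint itself, the naive sketch below instead draws from independent Dirichlet posteriors and keeps only draws satisfying Pr(X ≤ Y) ≥ 0.5; the toy counts, flat Dirichlet priors, and rejection step are assumptions of the sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
x_counts = np.array([5, 8, 4, 3])      # toy category counts for X
y_counts = np.array([2, 4, 7, 7])      # toy category counts for Y
K = x_counts.size

def prob_x_le_y(px, py):
    """Pr(X <= Y) for independent categorical X, Y on the common support 0..K-1."""
    return sum(px[i] * py[i:].sum() for i in range(K))

kept = []
while len(kept) < 1000:
    px = rng.dirichlet(1.0 + x_counts)      # posterior draw for X's cell probabilities
    py = rng.dirichlet(1.0 + y_counts)      # posterior draw for Y's cell probabilities
    if prob_x_le_y(px, py) >= 0.5:          # enforce stochastic precedence by rejection
        kept.append((px, py))
print(np.mean([prob_x_le_y(px, py) for px, py in kept]))
```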
60.
Abstract. The use of auxiliary variables for generating proposals within a Metropolis-Hastings setting has been suggested in many different contexts, in particular for simulation from complex distributions such as multimodal distributions, and in transdimensional approaches. For many of these approaches, the acceptance probabilities that are used can appear somewhat magical, and separate proofs of their validity have been given in each case. In this article, we present a general framework for constructing acceptance probabilities in auxiliary variable proposal generation. In addition to showing the similarities between many of the algorithms proposed in the literature, the framework also demonstrates that there is great flexibility in how acceptance probabilities can be constructed. With this flexibility, alternative acceptance probabilities are suggested. Some numerical experiments are also reported.
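As a small, concrete instance of an auxiliary-variable proposal (not the general framework of the article), the sketch below draws an auxiliary point u given the current state, proposes the new state from u, and accepts with a ratio evaluated on the augmented space. The bimodal target, Gaussian auxiliary and proposal kernels, and step size are illustrative choices; for this symmetric Gaussian choice the kernel terms actually cancel, but the code keeps the general form of the ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
log_pi = lambda x: np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)  # unnormalized bimodal target
s = 1.0
log_norm = lambda a, b: -0.5 * (a - b) ** 2 / s ** 2   # unnormalized N(a; b, s^2); constants cancel in the ratio

x, chain = 0.0, []
for _ in range(5000):
    u = rng.normal(x, s)            # auxiliary variable drawn given the current state
    x_prop = rng.normal(u, s)       # proposal generated from the auxiliary variable
    log_alpha = (log_pi(x_prop) + log_norm(u, x_prop) + log_norm(x, u)
                 - log_pi(x) - log_norm(u, x) - log_norm(x_prop, u))
    if np.log(rng.random()) < log_alpha:
        x = x_prop
    chain.append(x)
print(np.mean(chain), np.var(chain))
```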