871.
Kriging models have been widely used in computer experiments for the analysis of time-consuming computer codes. Based on kernels, they are flexible and can be tuned to many situations. In this paper, we construct kernels that reproduce the computer code complexity by mimicking its interaction structure. While the standard tensor-product kernel implicitly assumes that all interactions are active, the new kernels are suited to a general interaction structure and take advantage of the absence of interaction between some inputs. The methodology is twofold. First, the interaction structure is estimated from the data, using an initial standard Kriging model, and is represented by a so-called FANOVA graph. New FANOVA-based sensitivity indices are introduced to detect active interactions. This graph is then used to derive the form of the kernel, and the corresponding Kriging model is estimated by maximum likelihood. The performance of the overall procedure is illustrated by several 3-dimensional and 6-dimensional simulated and real examples. A substantial improvement is observed when the computer code has a relatively high level of complexity.
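A minimal sketch of the kernel idea in this abstract: an additive kernel with one product term per clique of a FANOVA graph, here taken as given rather than estimated from data. The squared-exponential base kernel, the function names, and the example clique structure `[(0, 1), (2,)]` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_1d(x, y, ls):
    """One-dimensional squared-exponential kernel."""
    return np.exp(-0.5 * ((x - y) / ls) ** 2)

def fanova_graph_kernel(X, Y, cliques, ls):
    """Additive kernel: one product term per clique of the FANOVA graph.

    X, Y   : (n, d) and (m, d) input matrices
    cliques: list of tuples of input indices, e.g. [(0, 1), (2,)],
             meaning inputs 0 and 1 interact while input 2 enters additively
    ls     : length-scale per input dimension
    """
    K = np.zeros((X.shape[0], Y.shape[0]))
    for clique in cliques:
        term = np.ones_like(K)
        for j in clique:
            term *= rbf_1d(X[:, j][:, None], Y[:, j][None, :], ls[j])
        K += term
    return K

# Example: 3 inputs, with an interaction only between inputs 0 and 1.
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 3))
K = fanova_graph_kernel(X, X, cliques=[(0, 1), (2,)], ls=[0.3, 0.3, 0.3])
print(K.shape, np.all(np.linalg.eigvalsh(K + 1e-9 * np.eye(20)) > 0))
```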
872.
Time series arising in practice often have an inherently irregular sampling structure or missing values, which can arise, for example, from a faulty measuring device or from the complex time-dependent nature of the phenomenon. Spectral decomposition of time series is a traditionally useful tool for analysing data variability. However, existing methods for spectral estimation often assume a regularly sampled time series, or require modifications to cope with irregular or 'gappy' data. Additionally, many techniques assume that the time series is stationary, which in the majority of cases is demonstrably not appropriate. This article addresses the spectral estimation of a non-stationary time series sampled with missing data. The time series is modelled as a locally stationary wavelet process in the sense introduced by Nason et al. (J. R. Stat. Soc. B 62(2):271–292, 2000), and its realization is assumed to feature missing observations. Our work proposes an estimator (the periodogram) for the process wavelet spectrum, which copes with the missing data whilst relaxing the strong assumption of stationarity. At the centre of our construction are second-generation wavelets built by means of the lifting scheme (Sweldens, Wavelet Applications in Signal and Image Processing III, Proc. SPIE, vol. 2569, pp. 68–79, 1995), designed to cope with irregular data. We investigate the theoretical properties of our proposed periodogram, and show that it can be smoothed to produce a bias-corrected spectral estimate by adopting a penalized least squares criterion. We demonstrate our method with real data and simulated examples.
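The estimator in this paper is built from second-generation lifting wavelets; as a far simpler stand-in, the sketch below computes a raw non-decimated Haar wavelet periodogram that skips any window containing missing observations. The Haar choice and all names are assumptions for illustration, not the authors' construction.

```python
import numpy as np

def haar_nd_coeffs(x, scale):
    """Non-decimated Haar detail coefficients at dyadic scale 2**scale."""
    h = 2 ** scale
    n = len(x)
    d = np.full(n, np.nan)
    for t in range(n - 2 * h + 1):
        left, right = x[t:t + h], x[t + h:t + 2 * h]
        # Skip windows touching missing observations.
        if np.isnan(left).any() or np.isnan(right).any():
            continue
        d[t] = (left.sum() - right.sum()) / np.sqrt(2 * h)
    return d

def raw_periodogram(x, n_scales=4):
    """Squared detail coefficients per scale; NaN where data were missing."""
    return np.vstack([haar_nd_coeffs(x, j) ** 2 for j in range(n_scales)])

# Toy non-stationary series with roughly 10% of observations missing.
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.5 * rng.standard_normal(256)
x[rng.choice(256, size=26, replace=False)] = np.nan
I = raw_periodogram(x)
print(I.shape)  # (4, 256)
```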
873.
874.
This paper discusses a novel strategy for simulating rare events and an associated Monte Carlo estimation of tail probabilities. Our method uses a system of interacting particles and exploits a Feynman-Kac representation of that system to analyze their fluctuations. Our precise analysis of the variance of a standard multilevel splitting algorithm reveals an opportunity for improvement. This leads to a novel method that relies on adaptive levels and produces, in the limit of an idealized version of the algorithm, estimates with optimal variance. The motivation for this theoretical work comes from problems occurring in watermarking and fingerprinting of digital content, which represent a new field of application for rare-event simulation techniques. Some numerical results show performance close to the idealized version of our technique for these practical applications.
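A toy version of adaptive multilevel splitting in the spirit of this abstract, estimating P(X > q) for standard normal X. The particle count, kill fraction k, and random-walk Metropolis move are arbitrary illustrative choices, not the authors' algorithm.

```python
import numpy as np

def ams_estimate(q=4.0, n=200, k=20, n_moves=10, seed=2):
    """Adaptive multilevel splitting estimate of p = P(X > q), X ~ N(0,1).

    Levels are placed adaptively at the k-th smallest particle score; the
    k killed particles are resampled from the survivors and moved with a
    Metropolis kernel restricted to stay above the current level.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    p_hat = 1.0
    while True:
        order = np.argsort(x)
        level = x[order[k - 1]]              # adaptive level
        if level >= q:
            break
        p_hat *= (n - k) / n                 # survival fraction per stage
        x[order[:k]] = rng.choice(x[order[k:]], size=k)
        for idx in order[:k]:                # move the restarted particles
            for _ in range(n_moves):
                prop = x[idx] + 0.5 * rng.standard_normal()
                accept = np.exp(0.5 * (x[idx] ** 2 - prop ** 2))
                if prop > level and rng.random() < accept:
                    x[idx] = prop
    return p_hat * np.mean(x > q)

print(ams_estimate())   # exact value: 1 - Phi(4) is about 3.17e-5
```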
875.
This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms can be applied to the factorization of very large matrices with missing entries.
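A stripped-down sketch of Gibbs sampling for a probit ordinal matrix factorization, with fixed cutpoints and simple Gaussian priors; the paper's hierarchical priors and variational algorithm are not reproduced, and all names and modelling choices here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_ordinal_mf(R, rank=2, n_iter=200, tau=1.0, rng=None):
    """Minimal Gibbs sampler for probit ordinal matrix factorization.

    R : (n, m) integer matrix with levels 1..L; 0 marks a missing entry.
    Model: R_ij = l iff c_{l-1} < u_i'v_j + eps_ij <= c_l, eps ~ N(0, 1),
    with fixed cutpoints c and N(0, 1/tau) priors on the factor entries.
    """
    if rng is None:
        rng = np.random.default_rng(3)
    n, m = R.shape
    L = R.max()
    cuts = np.concatenate(([-np.inf], np.linspace(-2, 2, L - 1), [np.inf]))
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    obs = R > 0
    Z = np.zeros((n, m))
    for _ in range(n_iter):
        M = U @ V.T
        lo, hi = cuts[R - 1] - M, cuts[R] - M      # entries with R = 0 unused
        Z[obs] = M[obs] + truncnorm.rvs(lo[obs], hi[obs], random_state=rng)
        for i in range(n):                         # Gaussian conditional of u_i
            Vi = V[obs[i]]
            prec = tau * np.eye(rank) + Vi.T @ Vi
            mean = np.linalg.solve(prec, Vi.T @ Z[i, obs[i]])
            U[i] = rng.multivariate_normal(mean, np.linalg.inv(prec))
        for j in range(m):                         # Gaussian conditional of v_j
            Uj = U[obs[:, j]]
            prec = tau * np.eye(rank) + Uj.T @ Uj
            mean = np.linalg.solve(prec, Uj.T @ Z[obs[:, j], j])
            V[j] = rng.multivariate_normal(mean, np.linalg.inv(prec))
    return U, V

R = np.array([[3, 1, 0], [2, 0, 3], [1, 2, 2]])    # 0 = missing entry
U, V = gibbs_ordinal_mf(R, rank=2, n_iter=50)
```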
876.
This paper compares the performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector MA(1) processes. The necessary and sufficient condition for the equality of mean squared errors associated with the two competing predictors is provided in the bivariate MA(1) case. Furthermore, it is argued that the condition for the equality of predictors stated by Lütkepohl (Forecasting Aggregated Vector ARMA Processes, Springer, Berlin, 1987) is only sufficient (not necessary) for the equality of mean squared errors. Finally, it is shown that equal forecasting accuracy for the two predictors can be achieved under specific assumptions on the parameters of the vector MA(1) structure.
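The flavour of the comparison can be checked numerically. For a bivariate MA(1) with known parameters, the disaggregate predictor's one-step MSE is w'Σw, while the aggregate predictor's MSE is the innovation variance of the univariate MA(1) implied for a_t = w'z_t. The sketch below computes both; it illustrates the setting only and does not reproduce the paper's necessary and sufficient condition.

```python
import numpy as np

def predictor_mses(Theta, Sigma, w=np.array([1.0, 1.0])):
    """One-step-ahead MSEs for forecasting w'z_t, z_t = e_t + Theta e_{t-1}.

    Disaggregate: forecast the vector process, then aggregate -> w'Sigma w.
    Aggregate   : fit the implied univariate MA(1) to a_t = w'z_t and
                  forecast it directly -> that MA(1)'s innovation variance.
    """
    mse_disagg = w @ Sigma @ w
    gamma0 = w @ (Sigma + Theta @ Sigma @ Theta.T) @ w   # Var(a_t)
    gamma1 = w @ Theta @ Sigma @ w                       # Cov(a_t, a_{t-1})
    rho = gamma1 / gamma0
    # Invertible MA(1) coefficient solving theta / (1 + theta^2) = rho.
    theta = (1 - np.sqrt(1 - 4 * rho ** 2)) / (2 * rho) if rho != 0 else 0.0
    mse_agg = gamma0 / (1 + theta ** 2)
    return mse_disagg, mse_agg

Theta = np.array([[0.5, 0.2], [0.1, 0.3]])
Sigma = np.eye(2)
print(predictor_mses(Theta, Sigma))   # disaggregate MSE <= aggregate MSE
```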
877.
We illustrate how multistate Markov and semi-Markov models can be used for the actuarial modeling of health insurance policies, focusing on products that are operated on a technical basis similar to that of life insurance. In the first part, we give an overview of the basic modeling frameworks commonly used and explain the calculation of prospective reserves and net premiums. In the second part, we discuss the biometric insurance risk, focusing on the calculation of implicit safety margins. We present new results on implicit margins in the semi-Markov model and on biometric estimation risk in the Markov model, and we explain why future research on the systematic biometric risk is needed.
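A minimal sketch of a prospective-reserve calculation in a discrete-time, time-homogeneous multistate Markov model via backward (Thiele-type) recursion. The three-state disability example, the payment-timing convention, and all parameter values are illustrative assumptions.

```python
import numpy as np

def prospective_reserves(P, benefits, premium, v, horizon):
    """Discrete-time prospective reserves in a multistate Markov model.

    P        : (S, S) one-period transition matrix (time-homogeneous here)
    benefits : length-S payment made each period while in a given state
    premium  : premium collected each period in state 0 (the active state)
    v        : one-period discount factor
    horizon  : number of remaining periods
    Returns V[t, s] = reserve at time t in state s (backward recursion).
    """
    S = len(benefits)
    cash = np.asarray(benefits, dtype=float)
    cash[0] -= premium                     # net outgo in the active state
    V = np.zeros((horizon + 1, S))
    for t in range(horizon - 1, -1, -1):
        V[t] = cash + v * P @ V[t + 1]     # Thiele-type recursion
    return V

# Three states: 0 = active, 1 = disabled (annuity of 1 per period), 2 = dead.
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.00, 0.00, 1.00]])
V = prospective_reserves(P, benefits=[0.0, 1.0, 0.0],
                         premium=0.12, v=1 / 1.03, horizon=30)
print(V[0])   # reserve at issue, per starting state
```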
878.
Lu Lin, Statistical Papers, 2004, 45(4): 529–544
The quasi-score function, as defined by Wedderburn (1974) and McCullagh (1983), is a linear function of the observations. The generalized quasi-score function introduced in this paper is a linear function of unbiased basis functions, which need not be linear in the observations and can easily be constructed from the meaning of the parameters, such as a mean or a median. The generalized quasi-likelihood estimate obtained from such a generalized quasi-score function is consistent and asymptotically normal. As a result, the optimal generalized quasi-score is obtained, and a method for constructing the optimal unbiased basis functions is introduced. In order to construct a potential function, a conservative generalized estimating function is defined; when the estimating function is conservative, the potential function for the projected score shares many properties of a log-likelihood function. Finally, some examples are given to illustrate the theoretical results. This work was supported by NNSF project (10371059) of China and the Youth Teacher Foundation of Nankai University.
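A toy illustration of an estimating equation built from unbiased basis functions: a mean-type and a median-type basis function for the location of a symmetric distribution, combined with fixed equal weights rather than the optimal weights derived in the paper. All names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def generalized_quasi_score(theta, x, w=(0.5, 0.5)):
    """Linear combination of two unbiased basis functions for a location
    parameter of a symmetric distribution:
      g1 = x - theta         (unbiased via the mean)
      g2 = sign(x - theta)   (unbiased via the median)
    Both have expectation zero at the true theta, so any fixed linear
    combination is itself an unbiased estimating function.
    """
    g1 = np.mean(x - theta)
    g2 = np.mean(np.sign(x - theta))
    return w[0] * g1 + w[1] * g2

rng = np.random.default_rng(4)
x = rng.standard_t(df=3, size=500) + 1.5   # heavy tails, true location 1.5
theta_hat = brentq(generalized_quasi_score, -10, 10, args=(x,))
print(theta_hat)
```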
879.
Retrospectively collected duration data are often reported incorrectly. An important type of such error is heaping: respondents tend to round the data off or up according to some rule of thumb. For two special cases of the Weibull model we study the behaviour of the 'naive estimators', which simply ignore the measurement error due to heaping, and derive closed-form expressions for the asymptotic bias. These results give a formal justification of empirical evidence and simulation-based findings reported in the literature. In addition, situations where a notable bias has to be expected can be identified, and an exact bias correction can be performed.
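A quick simulation of the phenomenon for the exponential case (a Weibull with shape 1): round-up heaping on a multiple-of-5 grid visibly biases the naive rate estimator downward. The grid and rate are arbitrary illustrative choices; the paper's closed-form bias expressions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
true_rate = 0.2                       # exponential = Weibull with shape 1
x = rng.exponential(1 / true_rate, size=200_000)

# Heaping: respondents round durations up to the next multiple of 5.
heaped = np.ceil(x / 5) * 5

# The 'naive' MLE of the exponential rate ignores the heaping error.
print(f"true rate: {true_rate:.3f}")
print(f"naive MLE on heaped data: {1 / heaped.mean():.4f}")   # about 0.126
```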
880.
Typically, parametric approaches to spatial problems require restrictive assumptions. On the other hand, in a wide variety of practical situations, nonparametric bivariate smoothing techniques have been shown to be successful for estimating small- or large-scale regularity factors, or even the signal content of spatial data taken as a whole. We propose a weighted local polynomial regression smoother suitable for fitting spatial data. To account for spatial variability, we both insert a spatial contiguity index into the standard formulation and construct a spatial-adaptive bandwidth selection rule. Our bandwidth selector depends on Geary's local indicator of spatial association. As an illustrative example, we provide a brief Monte Carlo study on equally spaced data, in which the performance of our smoother is compared with that of the standard local polynomial regression procedure. This note, though the result of a close collaboration, was elaborated as follows: Sections 1 and 2 by T. Sclocco, and the remainder by M. Di Marzio. The authors are grateful to the referees for constructive comments and suggestions.
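A rough sketch of a weighted local linear smoother with a spatially adaptive bandwidth. The local-Geary stand-in and the bandwidth rule h = 0.2 / (1 + c_i) are crude illustrative assumptions, not the authors' selector.

```python
import numpy as np

def local_linear_2d(s, z, s0, h):
    """Weighted local linear fit of z at location s0 with bandwidth h."""
    d = np.linalg.norm(s - s0, axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)                  # Gaussian kernel weights
    X = np.column_stack([np.ones(len(s)), s - s0])   # local linear design
    XtW = X.T * w                                    # weight each observation
    beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta[0]                                   # fitted value at s0

def local_geary(s, z, s0, radius=0.3):
    """Crude stand-in for Geary's local indicator: mean squared difference
    between the nearest observation and its neighbours, scaled by Var(z)."""
    d = np.linalg.norm(s - s0, axis=1)
    nb = d < radius
    z0 = z[np.argmin(d)]
    return np.mean((z[nb] - z0) ** 2) / z.var() if nb.any() else 1.0

rng = np.random.default_rng(6)
s = rng.uniform(size=(400, 2))
z = np.sin(3 * s[:, 0]) * np.cos(3 * s[:, 1]) + 0.2 * rng.standard_normal(400)
s0 = np.array([0.5, 0.5])

# Adaptive rule: smooth less (smaller h) where local dissimilarity is high.
h = 0.2 / (1.0 + local_geary(s, z, s0))
print(local_linear_2d(s, z, s0, h))
```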