71.
We consider a semi-parametric approach to perform the joint segmentation of multiple series sharing a common functional part. We propose an iterative procedure based on Dynamic Programming for the segmentation part and Lasso estimators for the functional part. Our Lasso procedure, based on the dictionary approach, allows us to estimate both smooth functions and functions with local irregularities, which permits more flexibility than previously proposed methods. This yields a better estimation of the functional part and improvements in the segmentation. The performance of our method is assessed using simulated data and real data from agriculture and geodetic studies. Our estimation procedure proves to be a reliable tool for detecting changes and for obtaining an interpretable estimate of the functional part of the model in terms of known functions.
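A minimal sketch of the dictionary-based Lasso idea described above: the functional part is expanded over a redundant dictionary mixing smooth (Fourier) and localized (step) atoms, and a Lasso fit selects a sparse combination. The toy signal, dictionary, and penalty value are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a dictionary-based Lasso fit for a shared functional part.
# Illustrative only: dictionary, signal and penalty are toy choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)

# Toy "functional part": smooth trend plus a local jump at t = 0.6.
f_true = np.sin(2 * np.pi * t) + 0.8 * (t > 0.6)
y = f_true + rng.normal(scale=0.2, size=n)

# Redundant dictionary: Fourier atoms (smooth) + step atoms (local irregularity).
fourier = [np.sin(2 * np.pi * k * t) for k in range(1, 11)]
fourier += [np.cos(2 * np.pi * k * t) for k in range(1, 11)]
steps = [(t > b).astype(float) for b in np.linspace(0.05, 0.95, 19)]
D = np.column_stack(fourier + steps)

# Lasso selects a sparse combination of atoms from the dictionary.
model = Lasso(alpha=0.01, fit_intercept=True, max_iter=50_000)
model.fit(D, y)
f_hat = model.predict(D)

print("selected atoms:", np.sum(model.coef_ != 0))
print("RMSE of functional part:", np.sqrt(np.mean((f_hat - f_true) ** 2)))
```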
72.
This paper is concerned with the Bayesian estimation of the parameters of the stochastic SIR (Susceptible-Infective-Removed) epidemic model from trajectory data. Specifically, counts of both infectives and susceptibles are assumed to be available on some time grid as the epidemic progresses. The diffusion approximation of the underlying jump process is then used to impute missing data between every pair of observation times. If the time step of the imputations is small enough, we derive the posterior distributions of the infection and recovery rates using the Milstein scheme. The paper also presents a Markov chain Monte Carlo (MCMC) simulation study demonstrating that the method provides accurate estimates, as illustrated on synthetic data from the SIR epidemic model and on real data.
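A rough sketch of the diffusion approximation to the stochastic SIR model, simulated forward with a plain Euler-Maruyama step; the paper's imputation relies on the finer Milstein scheme with an MCMC layer on top, neither of which is shown here. The rates, population size, and step size are illustrative.

```python
# Toy forward simulation of the SIR diffusion approximation (Euler-Maruyama).
# Illustrative parameter values; not the paper's Milstein-based imputation.
import numpy as np

rng = np.random.default_rng(1)
beta, gamma, N = 0.5, 0.2, 1000.0   # illustrative infection / recovery rates
S, I = 990.0, 10.0
dt, n_steps = 0.05, 2000
path = [(S, I)]

for _ in range(n_steps):
    a = beta * S * I / N          # instantaneous infection rate
    b = gamma * I                 # instantaneous removal rate
    dW1, dW2 = rng.normal(scale=np.sqrt(dt), size=2)
    S = S - a * dt - np.sqrt(a) * dW1
    I = I + (a - b) * dt + np.sqrt(a) * dW1 - np.sqrt(b) * dW2
    S, I = max(S, 0.0), max(I, 0.0)  # keep counts non-negative
    path.append((S, I))

S_path, I_path = np.array(path).T
print("peak infectives:", I_path.max(), "final susceptibles:", S_path.min())
```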
73.
Floods are a natural hazard evolving in space and time according to meteorological and river basin dynamics, so that a single flood event can affect different regions over its duration. This physical mechanism introduces spatio-temporal relationships between flood records and losses at different locations over a given time window, which should be taken into account for an effective assessment of collective flood risk. However, since extreme floods are rare events, the limited number of historical records usually prevents a reliable frequency analysis. To overcome this limitation, we move from the analysis of extreme events to the modeling of continuous stream flow records, preserving the spatio-temporal correlation structure of the entire process and making more efficient use of the information provided by continuous flow records. The approach is based on the dynamic copula framework, which splits the modeling of spatio-temporal properties by coupling suitable time series models, accounting for temporal dynamics, with multivariate distributions describing spatial dependence. The model is applied to 490 stream flow sequences recorded across 10 of the largest river basins in central and eastern Europe (Danube, Rhine, Elbe, Oder, Weser, Meuse, Rhone, Seine, Loire, and Garonne). Using available proxy data to quantify local flood exposure and vulnerability, we show that temporal dependence plays a key role in reproducing interannual persistence, and thus the magnitude and frequency of annual proxy flood losses aggregated at the basin-wide scale, while the copulas preserve the spatial dependence of losses at weekly and annual time scales.
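As a heavily simplified illustration of splitting temporal and spatial dependence (not the authors' flood-risk model), the sketch below filters each synthetic series with an AR(1) model, maps the residuals to uniforms through empirical ranks, and summarizes the cross-site dependence with a Gaussian copula correlation. The data, the AR(1) margins, and the Gaussian copula are all assumptions made for the example.

```python
# Sketch: temporal dynamics via AR(1) margins, spatial dependence via a
# Gaussian copula fitted to the residuals. Synthetic "flows", not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sites, n_days = 3, 2000

# Synthetic correlated, autocorrelated flows.
spatial_corr = np.array([[1.0, 0.7, 0.4], [0.7, 1.0, 0.5], [0.4, 0.5, 1.0]])
L = np.linalg.cholesky(spatial_corr)
eps = rng.normal(size=(n_days, n_sites)) @ L.T
flows = np.zeros_like(eps)
for t in range(1, n_days):
    flows[t] = 0.8 * flows[t - 1] + eps[t]   # shared AR(1) dynamics

# Step 1: temporal model per site (AR(1) by least squares), keep residuals.
resid = np.empty((n_days - 1, n_sites))
for j in range(n_sites):
    x, y = flows[:-1, j], flows[1:, j]
    phi = np.dot(x, y) / np.dot(x, x)
    resid[:, j] = y - phi * x

# Step 2: map residuals to uniforms (empirical ranks), then to normal scores,
# and estimate the Gaussian copula correlation describing spatial dependence.
u = (stats.rankdata(resid, axis=0) - 0.5) / resid.shape[0]
z = stats.norm.ppf(u)
copula_corr = np.corrcoef(z, rowvar=False)
print(np.round(copula_corr, 2))
```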
74.
Data envelopment analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency [B. Hollingsworth, The measurement of efficiency and productivity of health care delivery. Health Economics 17(10) (2008), pp. 1107–1128], but a long-standing concern is that DEA assumes the data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if measurement error is ignored [B.J. Gajewski, R. Lee, M. Bott, U. Piamjariyakul, and R.L. Taunton, On estimating the distribution of data envelopment analysis efficiency scores: an application to nursing homes’ care planning process. Journal of Applied Statistics 36(9) (2009), pp. 933–944; J. Ruggiero, Data envelopment analysis with stochastic data. Journal of the Operational Research Society 55 (2004), pp. 1008–1012]. We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® to estimate nursing units’ efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We also discuss how the approach can be generalized to situations where an external reliability study is not feasible.
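For context, here is a minimal sketch of the classical input-oriented CCR DEA model solved as a linear program, i.e. the deterministic starting point rather than the Bayesian DEA proposed in the paper; the input/output data are made up for illustration.

```python
# Sketch of input-oriented CCR DEA: for each DMU, minimize theta such that a
# lambda-weighted combination of peers uses at most theta times its inputs and
# produces at least its outputs. Illustrative data, not the nursing-unit data.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[5.0], [7.0], [4.0], [3.0], [6.0]])                           # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of DMU o (envelopment form)."""
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.c_[-X[o], X.T]                        # sum_j lam_j x_ij <= theta x_io
    b_in = np.zeros(m)
    A_out = np.c_[np.zeros(s), -Y.T]                # sum_j lam_j y_rj >= y_ro
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```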
75.
We propose new dynamic measures of uncertainty based on the notion of generalized dynamic entropy introduced in Di Crescenzo and Longobardi (2006). These measures uniquely determine the distribution function in both the continuous and discrete cases, and characterizations of some well-known distributions are provided. We also define some orderings and aging notions based on the generalized dynamic measures and prove some of their properties, obtaining as corollaries results that have recently appeared in the literature.
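As background, the classical residual (dynamic) entropy that such generalized measures extend can be checked numerically; the sketch below evaluates it for an exponential distribution, where it is constant in t because of the memoryless property. This is only the baseline measure, not the generalized dynamic entropy of Di Crescenzo and Longobardi.

```python
# Numerical check of the classical residual (dynamic) entropy
#   H(X; t) = -integral_t^inf f(x)/S(t) * log(f(x)/S(t)) dx,
# shown for an exponential distribution, where it equals 1 - log(rate) for all t.
import numpy as np
from scipy import integrate, stats

rate = 2.0
dist = stats.expon(scale=1.0 / rate)

def residual_entropy(t):
    """Residual entropy of the distribution beyond time t."""
    surv = dist.sf(t)
    integrand = lambda x: -(dist.pdf(x) / surv) * np.log(dist.pdf(x) / surv)
    upper = t + 60.0 / rate          # truncate the tail; the remainder is negligible
    value, _ = integrate.quad(integrand, t, upper)
    return value

for t in (0.0, 0.5, 2.0):
    print(f"t = {t}: H(X;t) = {residual_entropy(t):.4f}  (theory: {1 - np.log(rate):.4f})")
```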
76.
Using four backtesting methods, we examine the risk-forecasting accuracy of 22 constant and time-varying dynamic portfolio VaR forecasting models. We find that GJR_GPD_TV_Copula achieves the highest portfolio risk-forecasting accuracy, that GJR_GPD_Copula outperforms GJR_SKST_Copula in fitting, density forecasting, and portfolio risk-forecasting accuracy, and that the portfolio risk-forecasting accuracy of the copula models is only weakly positively correlated with both their fitting accuracy and their density-forecasting accuracy.  相似文献   
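One common backtest of VaR exceedances is Kupiec's proportion-of-failures likelihood-ratio test; the sketch below applies it to synthetic violation indicators. It is given only as an example of a backtesting procedure and is not necessarily one of the four methods used in the study.

```python
# Kupiec proportion-of-failures (POF) backtest on synthetic VaR violations.
import numpy as np
from scipy.stats import chi2

def kupiec_pof(exceedances, p):
    """LR statistic and p-value for H0: P(violation) = p (assumes 0 < x < T)."""
    T = exceedances.size
    x = int(exceedances.sum())                         # number of VaR violations
    pi_hat = x / T
    log_l0 = (T - x) * np.log(1 - p) + x * np.log(p)   # likelihood under H0
    log_l1 = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2.0 * (log_l0 - log_l1)
    return lr, chi2.sf(lr, df=1)

rng = np.random.default_rng(3)
hits = rng.random(1000) < 0.017          # synthetic violations of a 1% VaR
lr, pval = kupiec_pof(hits, p=0.01)
print(f"LR = {lr:.3f}, p-value = {pval:.3f}")
```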
77.
This research was motivated by our goal of designing an efficient clinical trial to compare two doses of docosahexaenoic acid supplementation for reducing the rate of earliest preterm births (ePTB) and/or preterm births (PTB). Dichotomizing continuous gestational age (GA) data using a classic binomial distribution results in a loss of information and reduced power. A distributional approach is an improved strategy that retains the statistical power of the continuous distribution. However, distributions that fit the data properly, particularly in the tails, must be chosen, especially when the data are skewed. A recent study proposed a skew-normal method. We propose a three-component normal mixture model and introduce separate treatment effects at the different components of GA. We evaluate the operating characteristics of the mixture, beta-binomial, and skew-normal models through simulation, and we apply the three methods to data from two completed clinical trials from the USA and Australia. Finite mixture models are shown to have favorable properties in the PTB analysis but minimal benefit for the ePTB analysis, while normal models on log-transformed data have the largest bias. We therefore recommend the finite mixture model for PTB studies; either the finite mixture model or the beta-binomial model is acceptable for ePTB studies.
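A minimal sketch of the finite-mixture idea: fit a three-component normal mixture to gestational age and read the PTB and ePTB rates off the fitted mixture CDF. The simulated GA data, component values, and the 37- and 28-week cutoffs are illustrative assumptions, not the trial data or the full treatment-effect model.

```python
# Three-component normal mixture for gestational age (GA), with PTB and ePTB
# rates computed from the fitted mixture CDF. Simulated data for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm

rng = np.random.default_rng(4)
ga = np.concatenate([
    rng.normal(39.5, 1.2, size=850),   # term births
    rng.normal(34.0, 2.0, size=120),   # moderately preterm
    rng.normal(26.0, 2.5, size=30),    # earliest preterm
]).reshape(-1, 1)

gm = GaussianMixture(n_components=3, random_state=0).fit(ga)

def mixture_cdf(x):
    """CDF of the fitted mixture: weighted sum of component normal CDFs."""
    means = gm.means_.ravel()
    sds = np.sqrt(gm.covariances_.ravel())
    return float(np.sum(gm.weights_ * norm.cdf(x, loc=means, scale=sds)))

print("estimated PTB rate  P(GA < 37):", round(mixture_cdf(37.0), 4))
print("estimated ePTB rate P(GA < 28):", round(mixture_cdf(28.0), 4))
print("empirical PTB rate:", round(float(np.mean(ga < 37.0)), 4))
```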
78.
Minimum information bivariate distributions with uniform marginals and a specified rank correlation are studied in this paper. These distributions play an important role in a particular way of modeling dependent random variables that has been used in the computer code UNICORN for carrying out uncertainty analyses. It is shown that these minimum information distributions have a particular form which makes simulation of conditional distributions very simple. Approximations to the continuous distributions are discussed and explicit formulae are determined. Finally, a relation to DAD theorems is discussed, and a numerical algorithm with a geometric rate of convergence is given for determining the minimum information distributions.
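A small numerical sketch of the DAD-type iterative scaling mentioned above: on a discretized grid, start from the kernel exp(theta*x*y) and alternately rescale rows and columns until both marginals are uniform, then read off the induced rank correlation. Solving for theta to hit a prescribed rank correlation is omitted; theta is fixed by hand for illustration.

```python
# Iterative (Sinkhorn/DAD-style) scaling of exp(theta * x * y) to uniform
# marginals on a grid, as an illustration of the minimum-information form.
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n                 # grid on (0, 1)
theta = 5.0                                  # illustrative dependence parameter
P = np.exp(theta * np.outer(x, x))           # unnormalized kernel

target = np.full(n, 1.0 / n)                 # uniform marginals on the grid
for _ in range(200):                         # converges geometrically in practice
    P *= (target / P.sum(axis=1))[:, None]   # rescale rows
    P *= (target / P.sum(axis=0))[None, :]   # rescale columns

print("max marginal error:", np.abs(P.sum(axis=1) - target).max())
# Spearman's rho for uniform marginals: 12 * E[UV] - 3.
rho_s = 12.0 * float((P * np.outer(x, x)).sum()) - 3.0
print("Spearman rho of the scaled distribution:", round(rho_s, 3))
```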
79.
In this paper the Bayesian analysis of incomplete categorical data under informative general censoring proposed by Paulino and Pereira (1995) is revisited. That analysis is based on Dirichlet priors and can be applied to any missing-data pattern. However, few properties of the resulting posterior distributions are known, which severely limits the posterior computations. Here it is shown how a Monte Carlo simulation approach based on an alternative parameterisation can be used to overcome these computational difficulties. The proposed simulation approach makes approximate estimation of general parametric functions available and can be implemented in a very straightforward way.
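A minimal sketch of the Monte Carlo idea in the complete-data case: draw cell probabilities from a Dirichlet posterior and approximate the posterior of any parametric function from the draws. The counts, the flat prior, and the chosen function are illustrative; the censored-data parameterisation of the paper is not reproduced here.

```python
# Dirichlet posterior simulation for categorical cell probabilities and a
# derived parametric function (a ratio of two cells). Toy counts and prior.
import numpy as np

rng = np.random.default_rng(5)
counts = np.array([30, 12, 8, 50])          # fully classified cell counts (toy)
prior = np.ones_like(counts, dtype=float)   # Dirichlet(1,...,1) prior

draws = rng.dirichlet(prior + counts, size=10_000)   # posterior draws of theta

g = draws[:, 0] / draws[:, 3]               # example parametric function of theta
print("posterior mean of theta:", np.round(draws.mean(axis=0), 3))
print("95% credible interval for theta_1/theta_4:",
      np.round(np.percentile(g, [2.5, 97.5]), 3))
```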
80.
In this article, the least squares (LS) estimates of the parameters of periodic autoregressive (PAR) models are investigated for various distributions of the error terms via Monte Carlo simulation. Besides the Gaussian distribution, the study covers the exponential, gamma, Student's t, and Cauchy distributions. The estimates are compared across distributions using bias and MSE criteria. The effects of other factors are also examined: non-constancy of the model orders, non-constancy of the variances of the seasonal white noise, the period length, and the length of the time series. The simulation results indicate that the method is in general robust for the estimation of the AR parameters with respect to the distribution of the error terms and the other factors; however, the estimates were in some cases noticeably poor for the Cauchy distribution. It is also observed that the variances of the estimates of the white noise variances are strongly affected by the skewness of the distribution of the error terms.
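A small sketch of the setting studied: simulate a PAR(1) series with period 4 and recover the season-specific coefficients by per-season least squares, once with Gaussian and once with Student-t errors. The coefficients, period, and sample size are illustrative choices.

```python
# Simulate a periodic autoregressive PAR(1) model and estimate the seasonal
# AR coefficients by per-season least squares. Illustrative parameters.
import numpy as np

def simulate_par1(phi, n, rng, error="gaussian"):
    """PAR(1): X_t = phi[t mod s] * X_{t-1} + e_t, period s = len(phi)."""
    s = len(phi)
    x = np.zeros(n)
    for t in range(1, n):
        e = rng.standard_t(df=3) if error == "t" else rng.normal()
        x[t] = phi[t % s] * x[t - 1] + e
    return x

def ls_par1(x, s):
    """Per-season least-squares estimates of the PAR(1) coefficients."""
    idx = np.arange(1, len(x))
    phi_hat = np.zeros(s)
    for season in range(s):
        t_idx = idx[idx % s == season]
        prev, curr = x[t_idx - 1], x[t_idx]
        phi_hat[season] = np.dot(prev, curr) / np.dot(prev, prev)
    return phi_hat

rng = np.random.default_rng(6)
phi_true = np.array([0.9, -0.3, 0.5, 0.1])
for err in ("gaussian", "t"):
    x = simulate_par1(phi_true, n=4000, rng=rng, error=err)
    print(err, "errors, phi_hat =", np.round(ls_par1(x, s=4), 3))
```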