991.
A sample size justification should be given for all clinical investigations. However, sometimes the objective of a trial is to estimate an effect with a view to planning a later definitive study. This paper describes the calculations for designing studies in which one wishes to adopt an estimation approach, using confidence intervals around the overall response. Calculations are given for data anticipated to take a Normal form. Copyright © 2004 John Wiley & Sons, Ltd.
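As a rough illustration of this kind of precision-based calculation, the sketch below (my own example, not taken from the paper) chooses n so that a (1 − α) confidence interval for a single Normal mean has a prescribed half-width; the values of σ, w and α are purely illustrative.

```python
# Minimal sketch of a precision-based sample size calculation for a single
# Normal mean: choose the smallest n for which the (1 - alpha) confidence
# interval has half-width at most w, assuming sigma is known.
import math
from scipy.stats import norm

def n_for_ci_halfwidth(sigma, w, alpha=0.05):
    """Smallest n with z_{1-alpha/2} * sigma / sqrt(n) <= w."""
    z = norm.ppf(1 - alpha / 2)
    return math.ceil((z * sigma / w) ** 2)

print(n_for_ci_halfwidth(sigma=10.0, w=2.5))  # 62 for these illustrative inputs
```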
992.
When elections are close in time, voters may stick to their preferred party or choose a different option for several reasons; reliable estimates of the amount of transitions across the available options allow one to answer a number of relevant questions about electoral behaviour. We describe a modified version of the model due to Brown and Payne (J Am Stat Assoc 81:453–460, 1986) and argue that it is based on simple, yet realistic, assumptions with a direct interpretation in terms of individual behaviour, and that it compares well with other models proposed more recently. We apply the model to an Italian borough where, in June 2009, two elections were held simultaneously and a runoff took place two weeks later. Estimates of the joint distribution of voters between the European Parliament election and the other two elections provide evidence of substantially different kinds of voting behaviour which, given the specific context, we interpret in the light of the recent literature on the subject.
993.
The problem of estimating the derivative of a probability density f is considered, using wavelet orthogonal bases. We consider an important class of dependent random variables, the so-called mixing random variables, and investigate the precise asymptotic expression for the mean integrated error of the wavelet estimators. We show that the mean integrated error of the proposed estimator attains the same rate as when the observations are independent, under certain weak dependence conditions imposed on the {X_i}, defined on {Ω, N, P}.
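For orientation, a common linear wavelet estimator of the density derivative takes the following form (a generic sketch at a single resolution level j_0; the paper's estimator may differ in its choice of level, thresholding and treatment of dependence):

```latex
\hat{f'}_n(x) = \sum_{k} \hat{\alpha}_{j_0 k}\,\phi_{j_0 k}(x),
\qquad
\hat{\alpha}_{j_0 k} = -\frac{1}{n}\sum_{i=1}^{n} \phi'_{j_0 k}(X_i),
\qquad
\phi_{j_0 k}(x) = 2^{j_0/2}\,\phi\bigl(2^{j_0}x - k\bigr).
```

The minus sign comes from integration by parts, since \(\int f'(x)\,\phi_{j_0 k}(x)\,dx = -\int f(x)\,\phi'_{j_0 k}(x)\,dx\) for a smooth, compactly supported scaling function \(\phi\).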
994.
Time series arising in practice often have an inherently irregular sampling structure or missing values, which can arise, for example, from a faulty measuring device or from the complex time-dependent nature of the phenomenon. Spectral decomposition of a time series is a traditionally useful tool for analysing data variability. However, existing methods for spectral estimation often assume a regularly sampled time series, or require modifications to cope with irregular or 'gappy' data. Additionally, many techniques assume that the time series is stationary, which in the majority of cases is demonstrably not appropriate. This article addresses spectral estimation of a non-stationary time series sampled with missing data. The time series is modelled as a locally stationary wavelet process in the sense introduced by Nason et al. (J. R. Stat. Soc. B 62(2):271–292, 2000), and its realization is assumed to feature missing observations. Our work proposes an estimator (the periodogram) for the process wavelet spectrum which copes with the missing data whilst relaxing the strong assumption of stationarity. At the centre of our construction are second-generation wavelets built by means of the lifting scheme (Sweldens, Wavelet Applications in Signal and Image Processing III, Proc. SPIE, vol. 2569, pp. 68–79, 1995), designed to cope with irregular data. We investigate the theoretical properties of our proposed periodogram and show that it can be smoothed to produce a bias-corrected spectral estimate by adopting a penalized least squares criterion. We demonstrate our method with real data and simulated examples.
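For reference, in the regularly sampled, fully observed locally stationary wavelet setting of Nason et al. (2000), which this work generalises to lifting wavelets and missing data, the empirical wavelet coefficients and raw wavelet periodogram are (a standard form, quoted here as background rather than from the paper itself):

```latex
d_{j,k} = \sum_{t=0}^{T-1} X_t\,\psi_{j,k}(t),
\qquad
I_{j,k} = \lvert d_{j,k}\rvert^{2},
```

where the \(\psi_{j,k}\) are non-decimated discrete wavelets; the raw periodogram is a biased estimator of the evolutionary wavelet spectrum and is corrected and smoothed before use.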
995.
This paper discusses a novel strategy for simulating rare events and an associated Monte Carlo estimation of tail probabilities. Our method uses a system of interacting particles and exploits a Feynman-Kac representation of that system to analyse their fluctuations. Our precise analysis of the variance of a standard multilevel splitting algorithm reveals an opportunity for improvement. This leads to a novel method that relies on adaptive levels and produces, in the limit of an idealized version of the algorithm, estimates with optimal variance. The motivation for this theoretical work comes from problems occurring in watermarking and fingerprinting of digital content, which represents a new field of application for rare-event simulation techniques. Numerical results show performance close to the idealized version of our technique for these practical applications.
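The following is a minimal sketch of a fixed-effort adaptive multilevel splitting estimator for a toy Gaussian tail probability; the score function, the target distribution and the constrained Metropolis move are illustrative choices of mine, not the exact algorithm analysed in the paper.

```python
# Adaptive multilevel splitting sketch for p = P(X > L), X ~ N(0, 1).
# At each stage the level is set adaptively to the k-th smallest particle
# score, the k worst particles are resampled from the survivors, and all
# particles are moved by a Metropolis kernel that leaves N(0, 1) conditioned
# on {x > level} invariant.
import numpy as np

def adaptive_multilevel_splitting(L=4.0, n=1000, k=100, n_moves=20, rho=0.9,
                                  seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                     # initial particle cloud
    log_p = 0.0
    while True:
        order = np.argsort(x)
        level = x[order[k - 1]]                    # adaptive splitting level
        if level >= L:
            break
        log_p += np.log((n - k) / n)               # survival fraction at this level
        survivors = x[order[k:]]
        x = np.concatenate([survivors, rng.choice(survivors, size=k)])
        for _ in range(n_moves):                   # constrained Metropolis moves
            prop = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
            x = np.where(prop > level, prop, x)
    return np.exp(log_p) * np.mean(x > L)

print(adaptive_multilevel_splitting())             # true P(X > 4) is about 3.2e-5
```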
996.
Every day we face all kinds of risks, and insurance is in the business of providing us a means to transfer or share these risks, usually to eliminate or reduce the resulting financial burden, in exchange for a predetermined price or tariff. Actuaries are considered professional experts in the economic assessment of uncertain events and, equipped with many statistical tools for analytics, they help formulate a fair and reasonable tariff associated with these risks. An important part of the process of establishing fair insurance tariffs is risk classification, which involves grouping risks into classes that share a homogeneous set of characteristics, allowing the actuary to price-discriminate reasonably. This article is a survey of the statistical tools for risk classification used in insurance. Because more complex data have recently become available in the industry, together with the technology to analyse them, we additionally discuss modern techniques that have recently emerged in the statistics discipline and can be used for risk classification. While several of the illustrations in the paper focus on general, or non-life, insurance, many of the principles we examine can be applied similarly to life insurance. Furthermore, we distinguish between a priori and a posteriori ratemaking. The former forms the basis for ratemaking when a policyholder is new and little information may be available; the latter uses additional historical information about policyholder claims when this becomes available. In effect, the resulting a posteriori premium allows one to correct and adjust the previous a priori premium, making the price discrimination even fairer and more reasonable.
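As a concrete illustration of a priori risk classification, the sketch below fits a Poisson GLM for claim frequency with log-exposure as an offset, a standard actuarial tool of the kind surveyed in the paper; the rating factors and the synthetic portfolio are purely hypothetical.

```python
# A priori ratemaking sketch: Poisson GLM for claim counts with exposure offset.
# The factors (age_band, car_group) and the simulated frequencies are invented
# for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
portfolio = pd.DataFrame({
    "age_band":  rng.choice(["18-25", "26-59", "60+"], size=n),
    "car_group": rng.choice(["small", "medium", "large"], size=n),
    "exposure":  rng.uniform(0.25, 1.0, size=n),   # policy-years
})
# Synthetic claim counts with a higher frequency for young drivers.
base_freq = np.where(portfolio["age_band"] == "18-25", 0.25, 0.10)
portfolio["claims"] = rng.poisson(base_freq * portfolio["exposure"])

# Tariff model: expected claim frequency per policy-year by risk class.
model = smf.glm("claims ~ age_band + car_group", data=portfolio,
                family=sm.families.Poisson(),
                offset=np.log(portfolio["exposure"])).fit()
print(model.summary())
```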
997.
We prove that the complete graph K_v can be decomposed into dodecahedra if and only if v ≡ 1, 16, 25 or 40 (mod 60), with v ≠ 16.
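The congruence classes are consistent with the obvious necessary conditions: the dodecahedron graph is 3-regular with 30 edges, so a decomposition of K_v requires 3 | (v − 1) and 30 | v(v − 1)/2. A quick check (my own illustration, not from the paper) recovers exactly the residues 1, 16, 25 and 40 modulo 60:

```python
# Enumerate residues v mod 60 satisfying the necessary divisibility conditions
# for decomposing K_v into dodecahedra (3-regular, 30 edges each).
admissible = [v for v in range(60)
              if (v - 1) % 3 == 0 and (v * (v - 1) // 2) % 30 == 0]
print(admissible)   # -> [1, 16, 25, 40]
```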
998.
We study nonlinear least-squares problems that can be transformed into linear problems by a change of variables. We derive a general formula for the statistically optimal weights and prove that the resulting linear regression gives an optimal estimate (one satisfying an analogue of the Cramér-Rao lower bound) in the limit of small noise.
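A classical instance of this kind of problem is fitting y ≈ a·exp(b·x): taking logarithms makes the model linear, and in the small-noise limit the transformed errors have variance proportional to 1/y², suggesting weights proportional to y². The sketch below is my own illustration of that weighting idea on synthetic data; the paper derives the general optimal-weight formula.

```python
# Fit y = a * exp(b * x) via the change of variables log(y) = log(a) + b * x,
# using weighted linear least squares with weights w_i = y_i**2 (the natural
# small-noise weighting for this transformation).
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true, sigma = 2.0, 0.5, 0.05
x = np.linspace(0.0, 3.0, 50)
y = a_true * np.exp(b_true * x) + sigma * rng.standard_normal(x.size)

X = np.column_stack([np.ones_like(x), x])
W = np.diag(y**2)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.log(y))
a_hat, b_hat = np.exp(beta[0]), beta[1]
print(a_hat, b_hat)   # close to (2.0, 0.5) when the noise is small
```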
999.
In this paper we propose a hybrid hazard regression model with threshold stress which includes the proportional hazards and accelerated failure time models as particular cases. To describe the behaviour of lifetimes, the generalized gamma distribution is assumed, and an inverse power law model with a threshold stress is considered. For parameter estimation we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo techniques. We assume proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under the assumption of vague priors. Further, some discussion of model selection criteria is given. The methodology is illustrated on simulated and real lifetime data sets.
1000.
In statistical analysis, particularly in econometrics, it is usual to consider regression models in which the dependent variable is censored (limited). In particular, a censoring scheme to the left of zero is considered here. In this article, an extension of the classical normal censored model is developed by considering independent disturbances with an identical Student-t distribution. In the context of maximum likelihood estimation, an expression for the expected information matrix is provided, and an efficient EM-type algorithm for the estimation of the model parameters is developed. To identify which variables affect the income of housewives, the results and methods are applied to a real data set. A brief review of the normal censored regression model, or Tobit model, is also presented.
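For context, the classical normal Tobit model that the paper extends to Student-t errors can be fitted by direct maximum likelihood; the sketch below, on synthetic data with illustrative parameter values, is my own and not taken from the paper.

```python
# Classical (normal) Tobit model: latent y* = X @ beta + eps, eps ~ N(0, sigma^2),
# observed y = max(y*, 0).  Fitted here by numerically maximising the censored
# log-likelihood on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true, sigma_true = np.array([-0.5, 1.0]), 1.0
y = np.maximum(X @ beta_true + sigma_true * rng.standard_normal(n), 0.0)

def neg_loglik(theta):
    beta, sigma = theta[:-1], np.exp(theta[-1])    # log-parametrise sigma > 0
    mu = X @ beta
    obs, cens = y > 0, y == 0
    ll = norm.logpdf(y[obs], loc=mu[obs], scale=sigma).sum()
    ll += norm.logcdf(-mu[cens] / sigma).sum()     # contribution of censored cases
    return -ll

res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
print(res.x[:2], np.exp(res.x[2]))                 # approx beta_true and sigma_true
```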