11.
Bias Correction in the Dynamic Panel Data Model with a Nonscalar Disturbance Covariance Matrix
Maurice J. G. Bun 《Econometric Reviews》2003,22(1):29-58
Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995), On bias, inconsistency, and efficiency of various estimators in dynamic panel data models, J. Econometrics 68:53-78; Kiviet, J. F. (1999), Expectations of expansions for estimators in a dynamic panel data model: some results for weakly exogenous regressors, in: Hsiao, C., Lahiri, K., Lee, L.-F., Pesaran, M. H., eds., Analysis of Panels and Limited Dependent Variables, Cambridge: Cambridge University Press, pp. 199-225] are extended to higher-order dynamic panel data models with a general covariance structure. The focus is on estimation of both short- and long-run coefficients. The results show that proper modelling of the disturbance covariance structure is indispensable. The bias approximations are used to construct bias-corrected estimators, which are then applied to quarterly data from 14 European Union countries. Money demand functions for M1, M2 and M3 are estimated for the EU area as a whole for the period 1991:I-1995:IV. Significant spillovers between countries are found, reflecting the dependence of domestic money demand on foreign developments. The empirical results show that in general plausible long-run effects are obtained by the bias-corrected estimators. Moreover, finite-sample bias, although of moderate magnitude, is present, underlining the importance of more refined estimation techniques. The efficiency gains from exploiting the heteroscedasticity and cross-correlation patterns between countries are also sometimes considerable.
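The finite-sample bias that motivates these corrections is easy to reproduce. Below is a minimal Monte Carlo sketch (not the paper's estimator or data) of the downward Nickell-type bias of the plain LSDV/within estimator in a first-order autoregressive panel; all parameter values are illustrative.

```python
import numpy as np

def lsdv_bias_demo(N=200, T=6, gamma=0.5, reps=500, seed=0):
    """Monte Carlo illustration of the finite-sample (Nickell) bias of the
    within/LSDV estimator in y_it = gamma*y_{i,t-1} + alpha_i + eps_it.
    Returns the mean LSDV estimate of gamma over all replications."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(reps):
        alpha = rng.normal(size=N)
        y = np.zeros((N, T + 1))
        y[:, 0] = alpha / (1 - gamma)  # start at each unit's stationary mean
        for t in range(1, T + 1):
            y[:, t] = gamma * y[:, t - 1] + alpha + rng.normal(size=N)
        y_lag, y_cur = y[:, :-1], y[:, 1:]
        # within transformation: subtract each unit's time mean
        x = y_lag - y_lag.mean(axis=1, keepdims=True)
        z = y_cur - y_cur.mean(axis=1, keepdims=True)
        estimates.append((x * z).sum() / (x * x).sum())
    return float(np.mean(estimates))
```

With a short panel (T = 6) the mean estimate falls well below the true gamma = 0.5, which is exactly the bias the approximation formulae target.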
12.
John S. J. HSU 《Revue canadienne de statistique》1995,23(4):399-410
This paper presents a new Laplacian approximation to the posterior density of η = g(θ). It has a simpler analytical form than that described by Leonard et al. (1989). The approximation derived by Leonard et al. requires a conditional information matrix Rη to be positive definite for every fixed η. However, in many cases, not all Rη are positive definite. In such cases, the computations of their approximations fail, since the approximation cannot be normalized. However, the new approximation may be modified so that the corresponding conditional information matrix can be made positive definite for every fixed η. In addition, a Bayesian procedure for contingency-table model checking is provided. An example of cross-classification between the educational level of a wife and fertility-planning status of couples is used for explanation. Various Laplacian approximations are computed and compared in this example and in an example of public school expenditures in the context of Bayesian analysis of the multiparameter Fisher-Behrens problem.
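The general mechanics of a Laplacian approximation can be illustrated on a one-parameter example. The sketch below (illustrative only, not this paper's approximation) matches a normal density to the mode and observed information of a Beta posterior:

```python
def laplace_approx_beta(a, b):
    """Laplace (normal) approximation to a Beta(a, b) density: match the
    mode and the curvature (observed information) of the log density there.
    Returns (mode, std) of the approximating normal. Assumes a, b > 1."""
    mode = (a - 1) / (a + b - 2)
    # negative second derivative of the log density at the mode:
    # (a-1)/x^2 + (b-1)/(1-x)^2
    info = (a - 1) / mode**2 + (b - 1) / (1 - mode) ** 2
    return mode, info ** -0.5
```

For Beta(50, 50) the approximating normal is centred at 0.5 with standard deviation about 0.0505, close to the exact posterior standard deviation of about 0.0498; the positive-definiteness requirement discussed above is the multiparameter analogue of `info` being positive.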
13.
Generalized additive models for location, scale and shape
R. A. Rigby D. M. Stasinopoulos 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(3):507-554
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton-Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
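The core GAMLSS idea of letting several distributional parameters depend on covariates can be sketched in a few lines. The toy below is an illustration only, not the authors' algorithm (which uses penalized likelihood with backfitting): it jointly fits linear predictors for the mean and the log standard deviation of a normal response by direct likelihood maximization.

```python
import numpy as np
from scipy.optimize import minimize

def fit_location_scale(x, y):
    """Jointly estimate mu = b0 + b1*x and log(sigma) = g0 + g1*x for a
    normal response by maximizing the likelihood. Returns (b0, b1, g0, g1)."""
    def negloglik(p):
        b0, b1, g0, g1 = p
        mu = b0 + b1 * x
        sigma = np.exp(g0 + g1 * x)  # log link keeps sigma positive
        return np.sum(0.5 * ((y - mu) / sigma) ** 2 + np.log(sigma))
    res = minimize(negloglik, np.zeros(4), method="BFGS")
    return res.x
```

Modelling log(sigma) rather than sigma itself mirrors the link-function device GAMLSS uses to keep each distributional parameter in its valid range.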
14.
Point processes are the stochastic models most suitable for describing physical phenomena that occur at irregularly spaced times, such as earthquakes. These processes are uniquely characterized by their conditional intensity, that is, by the probability that an event will occur in the infinitesimal interval (t, t+Δt), given the history of the process up to t. The seismic phenomenon displays different behaviours on different time and size scales; in particular, the occurrence of destructive shocks over some centuries in a seismogenic region may be explained by the elastic rebound theory. This theory has inspired the so-called stress release models: their conditional intensity translates the idea that an earthquake produces a sudden decrease in the amount of strain accumulated gradually over time along a fault, and the subsequent event occurs when the stress exceeds the strength of the medium. This study has a double objective: the formulation of these models in the Bayesian framework, and the assignment to each event of a mark, that is, its magnitude, modelled through a distribution that depends at time t on the stress level accumulated up to that instant. The resulting parameter space is constrained and dependent on the data, complicating Bayesian computation and analysis. We have resorted to Monte Carlo methods to solve these problems.
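A stress release intensity of the general kind described above can be sketched as follows; the functional form and the parameter names (mu, nu, rho, alpha) are illustrative assumptions, not the specification fitted in the paper.

```python
import numpy as np

def stress_release_intensity(t, event_times, event_sizes, mu, nu, rho, alpha):
    """Conditional intensity lambda(t) = exp(mu + nu*(rho*t - S(t))):
    stress loads linearly at rate rho and each past event releases a
    stress drop proportional (via alpha) to its size, so the intensity
    jumps down immediately after an event and then rebuilds."""
    released = alpha * sum(s for u, s in zip(event_times, event_sizes) if u < t)
    return float(np.exp(mu + nu * (rho * t - released)))
```

Between events the intensity rises with the accumulating stress; each event produces the sudden drop that encodes the elastic rebound idea.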
15.
Building on an analysis of the government-image performance evaluation system and of information confidence, a government-image evaluation model based on a belief function is established. The function is used to model five classes of uncertain evaluation problems in government-image assessment: the concept identity system, the behaviour identity system, the visual identity system, the environment identity system, and the personal identity system. Theoretical analysis shows that the function offers a useful reference for evaluating government image. Moreover, it can also be applied to performance evaluation of other, similarly complex systems, and is thus broadly applicable.
16.
We used two statistical methods to identify prognostic factors: a log-linear model (logistic and Cox regression, based on the notions of linearity and multiplicative relative risk), and the CORICO method (ICOnography of CORrelations), based on the geometric significance of the correlation coefficient. We applied the methods to two different situations (a 'case-control study' and a 'historical cohort'). We show that the geometric exploratory tool is particularly suited to the analysis of small samples with a large number of variables. It could save time when setting up new study protocols. In this instance, the geometric approach highlighted, without preconceived ideas, the potential role of multihormonality in the course of pituitary adenoma and the unexpected influence of the date of tumour excision on the risk attached to haemorrhage.
17.
Jan C. H. van Eijkeren 《Risk analysis》2002,22(1):159-173
A mechanistic model is presented describing the clearance of a compound in a precision-cut liver slice that is incubated in a culture medium. The problem of estimating metabolic rate constants in PBPK models from liver slice experiments is discussed using identifiability analysis. From the identifiability problem analysis, it appears that in addition to the clearance, the compound's free fraction in the slice and the diffusion rate of the exchange of the compound between culture medium and liver slice should be identified. In addition, knowledge of the culture medium volume, the slice volume, the compound's free fraction, and octanol-water-based partition between medium and slice is presupposed. The formal solution for identification is discussed from the perspective of experimental practice. A formally necessary condition for identification is the sampling of parent compound in liver slice or culture medium. However, due to experimental limitations and errors, sampling the parent compound in the slice together with additional sampling of metabolite pooled from the medium and the slice is required for identification in practice. Moreover, it appears that identification results are unreliable when the value of the intrinsic clearance exceeds the value of the diffusion coefficient, a condition to be verified a posteriori.
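The medium-slice exchange described above can be illustrated with a simple two-compartment simulation. The sketch below is a generic illustration with made-up parameter values and a plain Euler integrator, not the paper's PBPK model:

```python
import numpy as np

def simulate_slice_clearance(c_medium0=1.0, v_medium=1.0, v_slice=0.01,
                             k_diff=0.5, cl_int=0.2, fu_slice=1.0,
                             dt=0.001, t_end=10.0):
    """A compound diffuses between culture medium and liver slice at rate
    k_diff and is cleared from the slice at intrinsic clearance cl_int
    acting on the free fraction fu_slice. Returns the medium concentration
    over time (forward Euler steps)."""
    c_m, c_s = c_medium0, 0.0
    out = []
    for _ in range(int(t_end / dt)):
        flux = k_diff * (c_m - c_s)                 # medium -> slice exchange
        c_m += -flux / v_medium * dt
        c_s += (flux - cl_int * fu_slice * c_s) / v_slice * dt
        out.append(c_m)
    return np.array(out)
```

The identifiability warning in the abstract shows up here too: when `cl_int` greatly exceeds `k_diff`, the medium curve is governed almost entirely by diffusion, and clearance can no longer be estimated reliably from it.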
18.
The concepts of linear sufficiency, linear minimal sufficiency and linear completeness in the general Gauss-Markov model, introduced by Baksalary and Drygas, are developed further, and characterization theorems for these concepts are given using Rao's unified theory of least squares.
19.
Projecting losses associated with hurricanes is a complex and difficult undertaking that is wrought with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early poststorm estimates converging on a damage estimate in the 28 to 31 billion dollars range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
20.
Kepher Henry Makambi 《Statistical Methods and Applications》2002,11(1):127-138
The standard hypothesis testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, the standard procedure has been shown by various authors to have poor control of the type I error probability, leading to too many liberal decisions. In this article, two test procedures are proposed which rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level compared with the standard procedure.
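The small-sample issue with the normal reference distribution can be seen directly from the critical values. The sketch below is illustrative only (the paper's procedures are more involved than a plain k-1 degrees-of-freedom rule): it compares the two-sided thresholds for k centers.

```python
from scipy import stats

def rejection_thresholds(k, alpha=0.05):
    """Two-sided critical values for a meta-analytic test over k centers:
    the standard-normal threshold versus a t threshold with k-1 degrees
    of freedom. The t threshold is larger for small k, so the normal
    reference rejects too often (liberal type I error)."""
    z = stats.norm.ppf(1 - alpha / 2)
    t = stats.t.ppf(1 - alpha / 2, df=k - 1)
    return z, t
```

With k = 5 centers the normal cutoff is about 1.96 while the t cutoff is about 2.78, which is precisely the gap that produces the liberal decisions noted above.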