11.
E. Ayuga Téllez, A.J. Martín Fernández, C. González García, E. Martínez Falero. Journal of Applied Statistics, 2006, 33(8): 819-836
The aim of this paper is to describe a simulation procedure for comparing parametric regression against a non-parametric regression method, for different functions and sets of information. The proposed methodology reduces the lack of fit at the edges of the regression curves, and an acceptable result is obtained for the non-parametric estimation in all studied cases. The largest differences appear at the edges of the estimation. The results are applied to the study of dasometric variables, which do not fulfil the normality hypothesis needed for parametric estimation. The kernel regression reveals relationships between the studied variables that would not be detected with more rigid parametric models.
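As an illustration of this kind of comparison (not the authors' own simulation design), the sketch below contrasts a least-squares polynomial fit with a Nadaraya-Watson kernel regression on simulated data and reports the lack of fit at the edges of the curve; the test function, noise level and bandwidth are arbitrary choices.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, bandwidth):
    """Gaussian-kernel Nadaraya-Watson regression estimate on x_grid."""
    # Kernel weights: one row per grid point, one column per observation.
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)   # true curve: sin(x)

grid = np.linspace(0, 10, 101)
kernel_fit = nadaraya_watson(grid, x, y, bandwidth=0.5)

# Parametric competitor: a cubic polynomial fitted by least squares.
coefs = np.polyfit(x, y, deg=3)
poly_fit = np.polyval(coefs, grid)

# Compare lack of fit against the known true curve, overall and at the edges.
true = np.sin(grid)
edges = (grid < 1) | (grid > 9)
print("kernel MSE (all/edges):", np.mean((kernel_fit - true) ** 2),
      np.mean((kernel_fit[edges] - true[edges]) ** 2))
print("poly   MSE (all/edges):", np.mean((poly_fit - true) ** 2),
      np.mean((poly_fit[edges] - true[edges]) ** 2))
```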
12.
Applications of multivariate statistics in engineering are hard to find, apart from those in quality control. However, we think that further insight into some technological cases may be gained by using adequate multivariate analysis tools. In this paper, we propose a review of the key parameters of rotating electric machines with factor analysis. This statistical technique not only reduces the dimensionality of the case we are analysing, but also reveals subtle relationships between the variables under study. We show an application of this methodology by studying the interrelations between the key variables of an electric machine, in this case the squirrel-cage induction motor. Through a step-by-step presentation of the case study, we deal with some of the topics an applied researcher may face, such as the rotation of the original factors, the extraction of higher-order factors and the development of the exploratory model. As a result, we present a worthwhile framework both to confirm our previous knowledge and to capture unexplored facts. Moreover, it may provide a new approach to describing and understanding the design, performance and operating characteristics of these machines.
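A minimal sketch of such a factor-analysis workflow with scikit-learn is given below; the motor-parameter column names, the random data and the choice of two varimax-rotated factors are all hypothetical stand-ins for the variables actually studied in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: rows are motor designs, columns are key parameters.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(100, 5)),
                  columns=["rated_power", "rated_torque", "efficiency",
                           "starting_current", "slip"])

# Standardise, then extract two factors with a varimax rotation.
X = StandardScaler().fit_transform(df)
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

# Loadings show which parameters move together (the structure of the factors).
loadings = pd.DataFrame(fa.components_.T, index=df.columns,
                        columns=["factor_1", "factor_2"])
print(loadings)
```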
13.
Bengt Muthén, Tihomir Asparouhov. Journal of the Royal Statistical Society: Series A (Statistics in Society), 2009, 172(3): 639-657
A two-level regression mixture model is discussed and contrasted with the conventional two-level regression model. Simulated and real data shed light on the modelling alternatives. The real data analyses investigate gender differences in mathematics achievement from the US National Education Longitudinal Survey. The two-level regression mixture analyses show that unobserved heterogeneity should not be presupposed to exist only at level 2 at the expense of level 1. Both the simulated and the real data analyses show that level 1 heterogeneity in the form of latent classes can be mistaken for level 2 heterogeneity in the form of the random effects that are used in conventional two-level regression analysis. Because of this, mixture models have an important role to play in multilevel regression analyses. Mixture models allow heterogeneity to be investigated more fully, more correctly attributing different portions of the heterogeneity to the different levels.
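The sketch below illustrates the contrast discussed in the paper under simplified assumptions: data are simulated with level-1 latent classes only, and a conventional random-intercept model is fitted with statsmodels. It is not the authors' two-level regression mixture estimator; it only sets up the data structure and the conventional benchmark.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_groups, n_per = 50, 20
rows = []
for g in range(n_groups):
    for _ in range(n_per):
        # Level-1 heterogeneity only: each observation belongs to one of two
        # latent classes with different intercepts; there is no true
        # group-level random effect in the data-generating process.
        latent_class = rng.integers(2)
        x = rng.normal()
        y = (0.0 if latent_class == 0 else 2.0) + 0.5 * x + rng.normal(scale=0.5)
        rows.append({"group": g, "x": x, "y": y})
df = pd.DataFrame(rows)

# Conventional two-level (random-intercept) regression: the latent classes
# are not modelled, so their heterogeneity is split between the residual
# and the group-level variance components.
fit = smf.mixedlm("y ~ x", df, groups=df["group"]).fit()
print(fit.summary())
```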
14.
We propose a modification to the variant of link-tracing sampling suggested by Félix-Medina and Thompson [M.H. Félix-Medina, S.K. Thompson, Combining cluster sampling and link-tracing sampling to estimate the size of hidden populations, Journal of Official Statistics 20 (2004) 19–38] that allows the researcher a certain degree of control over the final sample size, the precision of the estimates or other characteristics of the sample that the researcher is interested in controlling. We achieve this goal by selecting an initial sequential sample of sites instead of the initial simple random sample of sites that those authors suggested. We estimate the population size by means of the maximum likelihood estimators suggested by the above-mentioned authors or by the Bayesian estimators proposed by Félix-Medina and Monjardin [M.H. Félix-Medina, P.E. Monjardin, Combining link-tracing sampling and cluster sampling to estimate the size of hidden populations: A Bayesian-assisted approach, Survey Methodology 32 (2006) 187–195]. Variances are estimated by means of jackknife and bootstrap estimators, as well as by the delta estimators proposed in the two above-mentioned papers. Interval estimates of the population size are obtained by means of Wald and bootstrap confidence intervals. The results of an exploratory simulation study indicate good performance of the proposed sampling strategy.
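A toy sketch of the sequential selection of sites that gives control over the final sample size is shown below; the site sizes are invented, and none of the population-size estimators mentioned in the abstract are implemented.

```python
import numpy as np

def sequential_site_sample(site_sizes, target_n, rng):
    """Select sites one at a time, at random and without replacement,
    until the accumulated number of sampled members reaches target_n.
    This illustrates only the sequential-selection idea, not the
    estimation methods discussed in the paper."""
    order = rng.permutation(len(site_sizes))
    chosen, total = [], 0
    for site in order:
        chosen.append(site)
        total += site_sizes[site]
        if total >= target_n:
            break
    return chosen, total

rng = np.random.default_rng(3)
# Hypothetical numbers of hidden-population members found at each site.
site_sizes = rng.poisson(lam=8, size=40)
sites, n_sampled = sequential_site_sample(site_sizes, target_n=100, rng=rng)
print(f"{len(sites)} sites selected, {n_sampled} members in the initial sample")
```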
15.
We propose a hidden Markov model for longitudinal count data in which sources of unobserved heterogeneity arise, making the data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, where the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects in the link function. Due to the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.
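The sketch below illustrates only the NPML/finite-mixture ingredient for overdispersed counts (an EM algorithm for a finite Poisson mixture); the hidden Markov structure and the individual-specific random effects of the full model are not implemented, and the simulated data are arbitrary.

```python
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(y, k, n_iter=200, seed=0):
    """EM for a k-component Poisson mixture (the NPML-style finite-mixture
    ingredient; the paper embeds this idea in a hidden Markov model)."""
    rng = np.random.default_rng(seed)
    lam = rng.uniform(y.min() + 0.5, y.max() + 0.5, size=k)  # mass points
    w = np.full(k, 1.0 / k)                                   # their weights
    for _ in range(n_iter):
        # E-step: posterior probability of each component for each count.
        dens = poisson.pmf(y[:, None], lam[None, :]) * w      # shape (n, k)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights and Poisson means.
        w = resp.mean(axis=0)
        lam = (resp * y[:, None]).sum(axis=0) / resp.sum(axis=0)
    return lam, w

# Overdispersed counts: a mixture of two Poisson regimes.
rng = np.random.default_rng(4)
y = np.concatenate([rng.poisson(2, 300), rng.poisson(9, 200)])
lam, w = em_poisson_mixture(y, k=2)
print("estimated mass points:", lam, "weights:", w)
```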
16.
Agustín Hernández Bastida, José María Pérez Sánchez. Journal of Applied Statistics, 2009, 36(8): 853-869
The distribution of the aggregate claims in one year plays an important role in actuarial statistics for computing, for example, insurance premiums when both the number and the size of the claims must be incorporated into the model. When the number of claims follows a Poisson distribution, the aggregate distribution is called the compound Poisson distribution. In this article we assume that the claim size follows an exponential distribution, and we then make an extensive study of this model by assuming a bidimensional prior distribution, with gamma marginals, for the parameters of the Poisson and exponential distributions. This study leads to expressions for net premiums and for marginal and posterior distributions in terms of some well-known special functions used in statistics. Finally, a Bayesian robustness study of this model is carried out. Bayesian robustness for bidimensional models was studied in depth in the 1990s, producing numerous results, but few applications dealing with this problem can be found in the literature.
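A Monte Carlo sketch of the compound Poisson-exponential model with gamma priors on both parameters is shown below; the prior hyperparameters are arbitrary, and the simulation only approximates the collective net premium E[S], not the closed-form expressions derived in the article.

```python
import numpy as np

rng = np.random.default_rng(5)

# Gamma priors on the Poisson claim-frequency rate and on the exponential
# claim-size rate (the shapes/scales below are illustrative, not from the paper).
a_lam, b_lam = 4.0, 2.0      # lambda ~ Gamma(shape=4, scale=2) -> mean 8 claims
a_beta, b_beta = 3.0, 0.001  # beta ~ Gamma(shape=3, scale=0.001): small rate,
                             # hence large mean claim size 1/beta

n_sim = 100_000
lam = rng.gamma(a_lam, b_lam, n_sim)
beta = rng.gamma(a_beta, b_beta, n_sim)
n_claims = rng.poisson(lam)

# Aggregate claims: a sum of n_claims exponential(rate=beta) severities.
# The sum of n iid Exp(beta) variables is Gamma(shape=n, scale=1/beta);
# n_claims = 0 gives zero aggregate claims.
agg = np.where(n_claims > 0, rng.gamma(np.maximum(n_claims, 1), 1.0 / beta), 0.0)

print("Monte Carlo net premium E[S]:", agg.mean())
```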
17.
It sometimes occurs that one or more components of the data exert a disproportionate influence on the model estimation. We need a reliable tool for identifying such troublesome cases in order to decide either to eliminate them from the sample, when the data collection was badly carried out, or otherwise to use the model with care, because the results could be affected by such components. Since a measure for detecting influential cases in the linear regression setting was proposed by Cook [Detection of influential observations in linear regression, Technometrics 19 (1977), pp. 15–18], and apart from versions of the same measure for other models, several new measures have been suggested as single-case diagnostics. For most of them cutoff values have been recommended (see [D.A. Belsley, E. Kuh, and R.E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, 2nd ed., John Wiley & Sons, New York, Chichester, Brisbane, 2004], for instance); however, the lack of a quantile-type cutoff for Cook's statistic has led analysts to rely only on index plots as diagnostic tools. Focusing on logistic regression, the aim of this paper is to provide the asymptotic distribution of Cook's distance in order to obtain a meaningful cutoff point for detecting influential and leverage observations.
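As a hedged illustration (assuming a recent statsmodels version with GLM influence diagnostics), the sketch below fits a logistic regression on simulated data and extracts the Cook's distances reported by statsmodels; it does not implement the asymptotic distribution or the cutoff derived in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
X = sm.add_constant(rng.normal(size=(n, 2)))
p = 1.0 / (1.0 + np.exp(-(X @ np.array([-0.5, 1.0, -1.5]))))
y = rng.binomial(1, p)

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Cook's distances for each observation from the GLM influence diagnostics.
infl = fit.get_influence()
cooks_d = infl.cooks_distance[0]

# Without a quantile-type cutoff, a common ad hoc screen is to inspect the
# largest values in an index plot; the paper's asymptotic distribution is
# meant to replace this rule of thumb with a principled cutoff.
flagged = np.argsort(cooks_d)[-5:]
print("largest Cook's distances at indices:", flagged, cooks_d[flagged])
```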
18.
Zdeněk Fabián. Journal of Statistical Planning and Inference, 2009, 139(11): 3773-3778
The second moment of the recently introduced scalar inference function can be viewed as a generalized Fisher information for continuous probability distributions. In this paper we call it the t-information and give some possible applications of the new concept.
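For orientation, the analogy stated in the abstract can be written as follows (the exact form of the scalar inference function T is defined in the paper and is not reproduced here):

```latex
% Classical Fisher information: the second moment of the score function.
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right]
% The t-information is, by analogy, the second moment of the scalar
% inference function T of a continuous distribution:
I_T = \mathbb{E}\!\left[T^{2}(X)\right]
```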
19.
Inmaculada Arostegui, Vicente Núñez-Antón, José M. Quintana. Journal of Applied Statistics, 2013, 40(3): 563-582
The statistical analysis of patient-reported outcomes (PROs) as endpoints has been shown to be of great practical relevance. The resulting scores or indexes from the questionnaires used to measure PROs can be treated as continuous or ordinal. The goal of this study is to propose and evaluate a recoding of the scores so that they can be treated as binomial outcomes and, therefore, analyzed using logistic regression with random effects. The general methodology of recoding is based on the observable values of the scores. In order to obtain an optimal recoding, the recoding method is evaluated for different values of the parameters of the binomial distribution and different probability distributions of the random effects. We illustrate, evaluate and validate the proposed recoding method with the Short Form-36 (SF-36) Survey and real data. The optimal recoding approach is very useful and flexible. Moreover, it has a natural interpretation, not only for ordinal scores, but also for questionnaires with many dimensions and different profiles, where a common method of analysis is desired, such as the SF-36.
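A minimal sketch of the recoding idea is shown below, with a simple proportional rounding rule standing in for the paper's optimal recoding and a plain binomial GLM standing in for the logistic regression with random effects; the scores, the covariate and the number of trials m are all invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Hypothetical SF-36-like domain scores on a 0-100 scale, plus one covariate.
n = 300
score = np.clip(rng.normal(65, 20, n), 0, 100)
age = rng.normal(50, 12, n)

# Proportional recoding to a binomial outcome with m trials
# (the paper searches for an optimal recoding; this rule is only a stand-in).
m = 10
successes = np.round(score / 100 * m).astype(int)
failures = m - successes

X = sm.add_constant(pd.DataFrame({"age": age}))
endog = np.column_stack([successes, failures])   # (successes, failures) pairs
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```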
20.
Mariantonietta Ruggieri, Antonella Plaia, Francesca Di Salvo, Gianna Agró. Journal of Applied Statistics, 2013, 40(4): 795-807
Knowledge of urban air quality is the first step in addressing air pollution issues. For the last few decades, many cities have been able to rely on a network of monitoring stations recording concentration values for the main pollutants. This paper focuses on functional principal component analysis (FPCA) to investigate multiple pollutant datasets measured over time at multiple sites within a given urban area. Our purpose is to extend what has been proposed in the literature to data that are multisite and multivariate at the same time. The approach proves effective in highlighting some relevant statistical features of the time series, giving the opportunity to identify significant pollutants and to follow the evolution of their variability over time. The paper also deals with the missing-value issue. As is well known, very long gap sequences often occur in air quality datasets, due to long-lasting failures that are not easily repaired or to data coming from a mobile monitoring station. In the considered dataset, large and continuous gaps are imputed by an empirical orthogonal function (EOF) procedure, after denoising the raw data by functional data analysis and before performing FPCA, in order to further improve the reconstruction.
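The sketch below illustrates a generic iterative EOF gap-filling step of the kind mentioned in the abstract (fill gaps, reconstruct from leading EOFs, repeat); the data are simulated, the number of modes is arbitrary, and the FPCA itself is not implemented.

```python
import numpy as np

def eof_impute(X, n_modes=2, n_iter=50):
    """Fill missing entries of a (time x site) data matrix by iteratively
    reconstructing it from its leading EOFs (truncated SVD). This is a
    generic EOF gap-filling sketch, not the exact procedure of the paper."""
    X = X.copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])  # initial fill: column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes] + mu
        X[mask] = recon[mask]                        # update only the gaps
    return X

# Hypothetical hourly concentrations at 5 sites with one long gap.
rng = np.random.default_rng(8)
t = np.arange(500)
truth = 40 + 10 * np.sin(2 * np.pi * t / 24)[:, None] + rng.normal(0, 3, (500, 5))
data = truth.copy()
data[100:160, 2] = np.nan     # a long continuous gap at one site
filled = eof_impute(data, n_modes=2)
print("RMSE on the gap:",
      np.sqrt(np.mean((filled[100:160, 2] - truth[100:160, 2]) ** 2)))
```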