Full-text access type
Paid full text | 1152 articles |
Free | 34 articles |
Subject classification
Management | 169 articles |
Ethnology | 20 articles |
Talent studies | 1 article |
Collected works and book series | 2 articles |
Demography | 124 articles |
Theory and methodology | 111 articles |
General | 24 articles |
Sociology | 481 articles |
Statistics | 254 articles |
Publication year
2024 | 2 articles |
2023 | 16 articles |
2022 | 14 articles |
2021 | 32 articles |
2020 | 53 articles |
2019 | 83 articles |
2018 | 89 articles |
2017 | 84 articles |
2016 | 70 articles |
2015 | 43 articles |
2014 | 57 articles |
2013 | 181 articles |
2012 | 61 articles |
2011 | 43 articles |
2010 | 43 articles |
2009 | 43 articles |
2008 | 34 articles |
2007 | 26 articles |
2006 | 30 articles |
2005 | 27 articles |
2004 | 19 articles |
2003 | 15 articles |
2002 | 15 articles |
2001 | 14 articles |
2000 | 8 articles |
1999 | 8 articles |
1998 | 12 articles |
1997 | 9 articles |
1996 | 12 articles |
1995 | 8 articles |
1994 | 6 articles |
1993 | 2 articles |
1992 | 5 articles |
1991 | 2 articles |
1990 | 1 article |
1989 | 1 article |
1988 | 1 article |
1987 | 3 articles |
1985 | 2 articles |
1984 | 2 articles |
1983 | 1 article |
1979 | 1 article |
1978 | 1 article |
1975 | 1 article |
1974 | 1 article |
1973 | 1 article |
1971 | 1 article |
1969 | 1 article |
1968 | 2 articles |
A total of 1,186 results were found (search time: 15 ms).
11.
We propose a hidden Markov model for longitudinal count data in which sources of unobserved heterogeneity arise, making the data overdispersed. The observed process, conditionally on the hidden states, is assumed to follow an inhomogeneous Poisson kernel, and the unobserved heterogeneity is modeled in a generalized linear model (GLM) framework by adding individual-specific random effects to the link function. Because of the complexity of the likelihood within the GLM framework, model parameters may be estimated by numerical maximization of the log-likelihood function or by simulation methods; we propose a more flexible approach based on the Expectation Maximization (EM) algorithm. Parameter estimation is carried out using a non-parametric maximum likelihood (NPML) approach in a finite mixture context. Simulation results and two empirical examples are provided.
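For intuition, here is a minimal simulation sketch of the kind of model described: a two-state hidden Markov chain with Poisson emissions, a log link, one covariate, and a subject-specific random effect drawn from a discrete mixing distribution with a few support points (in the spirit of NPML). All numerical values and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden Markov chain: 2 states with transition matrix P and initial law pi0.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])
state_effect = np.array([0.0, 1.2])      # state-specific intercepts on the log scale

# NPML-style discrete random effect: a few support points with probability masses.
support = np.array([-0.5, 0.0, 0.7])
masses = np.array([0.3, 0.4, 0.3])

n_subjects, T = 50, 10
x = rng.normal(size=(n_subjects, T))      # a time-varying covariate
beta = 0.4

counts = np.empty((n_subjects, T), dtype=int)
for i in range(n_subjects):
    b_i = rng.choice(support, p=masses)   # subject-specific random effect
    s = rng.choice(2, p=pi0)              # initial hidden state
    for t in range(T):
        # GLM log link: state intercept + covariate effect + random effect.
        log_rate = state_effect[s] + beta * x[i, t] + b_i
        counts[i, t] = rng.poisson(np.exp(log_rate))
        s = rng.choice(2, p=P[s])         # move the hidden chain forward
```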
12.
Agustín Hernández Bastida & José María Pérez Sánchez. Journal of Applied Statistics, 2009, 36(8): 853-869
The distribution of the aggregate claims in one year plays an important role in actuarial statistics, for example when computing insurance premiums in models that account for both the number and the size of the claims. When the number of claims follows a Poisson distribution, the aggregate distribution is called the compound Poisson distribution. In this article we assume that the claim size follows an exponential distribution, and we study this model extensively by placing a bivariate prior distribution, with gamma marginals, on the parameters of the Poisson and exponential distributions. This study yields expressions for net premiums and for marginal and posterior distributions in terms of some well-known special functions used in statistics. A Bayesian robustness study of the model is then carried out. Bayesian robustness for bivariate models was studied extensively in the 1990s, producing numerous results, but few applications dealing with this problem can be found in the literature.
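As a toy illustration of the compound Poisson-exponential aggregate claims model (not the paper's Bayesian computation, which involves special functions), the sketch below simulates yearly aggregate claims by Monte Carlo and checks the collective net premium E[S] = E[N] E[X]; the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 3.0          # Poisson rate for the yearly number of claims (illustrative)
mean_claim = 2.0   # mean of the exponential claim-size distribution (illustrative)

n_sim = 100_000
n_claims = rng.poisson(lam, size=n_sim)
# Aggregate claims: sum of N exponential claim sizes for each simulated year.
aggregate = np.array([rng.exponential(mean_claim, size=n).sum() for n in n_claims])

# Collective net premium E[S] = E[N] * E[X]; the Monte Carlo mean should agree.
print("theoretical net premium:", lam * mean_claim)
print("simulated net premium  :", aggregate.mean())
```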
13.
It sometimes occurs that one or more components of the data exert a disproportionate influence on the model estimation. A reliable tool is needed for identifying such troublesome cases, in order to decide either to eliminate them from the sample, when the data collection was badly carried out, or otherwise to use the model with care, because the results could be affected by such components. Since a measure for detecting influential cases in the linear regression setting was proposed by Cook [Detection of influential observations in linear regression, Technometrics 19 (1977), pp. 15–18], several new measures, apart from adaptations of the same measure to other models, have been suggested as single-case diagnostics. For most of them cutoff values have been recommended (see [D.A. Belsley, E. Kuh, and R.E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, 2nd ed., John Wiley & Sons, New York, Chichester, Brisbane, 2004], for instance); however, the lack of a quantile-type cutoff for Cook's statistic has led analysts to rely only on index plots as diagnostic tools. Focusing on logistic regression, the aim of this paper is to provide the asymptotic distribution of Cook's distance in order to find a meaningful cutoff point for detecting influential and leverage observations.
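The sketch below computes a generalized Cook's distance for a logistic regression fitted with statsmodels, using the standard approximation based on Pearson residuals and hat values from the IRLS-weighted design matrix; the simulated data are illustrative, and the sketch does not reproduce the paper's asymptotic cutoff, only the index-plot-style diagnostic it seeks to improve on.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p = 200, 3
X = sm.add_constant(rng.normal(size=(n, p)))
eta = X @ np.array([-0.5, 1.0, -1.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
mu = fit.fittedvalues                      # estimated probabilities
W = mu * (1 - mu)                          # IRLS weights for the logit link

# Hat values from the weighted design matrix.
XW = X * np.sqrt(W)[:, None]
H = XW @ np.linalg.solve(XW.T @ XW, XW.T)
h = np.diag(H)

# Pearson residuals and the usual generalized Cook's distance for GLMs.
r_pearson = (y - mu) / np.sqrt(W)
k = X.shape[1]
cooks_d = (r_pearson**2 * h) / (k * (1 - h)**2)

print("indices with largest Cook's distance:", np.argsort(cooks_d)[-5:])
```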
14.
Zdeněk Fabián. Journal of Statistical Planning and Inference, 2009, 139(11): 3773-3778
The second moment of the recently introduced scalar inference function can be viewed as a generalized Fisher information for continuous probability distributions. In this paper we call it the t-information and give some possible applications of the new concept.
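For orientation only (the notation here is an assumption, not the paper's): if $s(x;\theta)$ denotes the scalar inference function of a continuous distribution with density $f(x;\theta)$, the quantity in question is its second moment,

$$ I_t(\theta) = \int s(x;\theta)^2\, f(x;\theta)\,dx, $$

which coincides with the classical Fisher information when $s$ is the usual score $\partial_\theta \log f(x;\theta)$.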
15.
Inmaculada Arostegui, Vicente Núñez-Antón & José M. Quintana. Journal of Applied Statistics, 2013, 40(3): 563-582
The statistical analysis of patient-reported outcomes (PROs) as endpoints has been shown to be of great practical relevance. The resulting scores or indexes from the questionnaires used to measure PROs can be treated as continuous or ordinal. The goal of this study is to propose and evaluate a recoding of the scores so that they can be treated as binomial outcomes and, therefore, analyzed using logistic regression with random effects. The general recoding methodology is based on the observable values of the scores. In order to obtain an optimal recoding, the method is evaluated for different values of the parameters of the binomial distribution and different probability distributions of the random effects. We illustrate, evaluate and validate the proposed recoding with the Short Form-36 (SF-36) Survey and real data. The optimal recoding approach is useful and flexible. Moreover, it has a natural interpretation, not only for ordinal scores, but also for questionnaires with many dimensions and different profiles where a common method of analysis is desired, such as the SF-36.
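One plausible version of such a recoding (an illustration of the general idea, not necessarily the authors' exact rule) maps a bounded 0-100 score to a number of successes out of m binomial trials and then fits a binomial GLM; the paper's approach would additionally include subject-level random effects. The value of m and the covariate are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Illustrative SF-36-style scores on a 0-100 scale, one per subject.
scores = rng.integers(0, 101, size=100).astype(float)
m = 20                                    # assumed number of binomial trials after recoding
successes = np.rint(scores / 100 * m).astype(int)
failures = m - successes

# With the recoded (successes, failures) pairs, a binomial GLM can be fitted;
# in the paper's setting random effects would be added to this model.
x = rng.normal(size=scores.shape[0])      # an illustrative covariate
X = sm.add_constant(x)
fit = sm.GLM(np.column_stack([successes, failures]), X,
             family=sm.families.Binomial()).fit()
print(fit.params)
```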
16.
The relation between change points in multivariate surveillance is important but seldom considered. The sufficiency principle is used here to clarify the structure of some problems, to find efficient methods, and to determine appropriate evaluation metrics. We study processes where the changes occur simultaneously or with known time lags. The surveillance of spatial data is one example where known time lags can be of interest. A general version of a theorem for the sufficient reduction of processes that change with known time lags is given. A simulation study illustrates the benefits of the methods based on the sufficient statistics.
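A toy Python illustration of the idea of reducing lagged streams before monitoring (the paper's sufficient-reduction theorem is more general than this sketch; the lags, shift size and CUSUM constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

T = 200
lags = [0, 3, 7]        # known time lags of the three streams (assumed)
tau = 120               # change point of the first stream (illustrative)
shift = 1.0             # size of the mean shift

# Each stream is standard normal and shifts its mean at time tau + lag.
streams = []
for lag in lags:
    x = rng.normal(size=T)
    x[tau + lag:] += shift
    streams.append(x)

# Align the streams by their known lags so the change appears at the same index,
# then combine them; the aligned sum is the reduced series that is monitored.
aligned = [np.roll(x, -lag) for x, lag in zip(streams, lags)]
usable = T - max(lags)                     # drop the wrapped-around tail
reduced = np.sum([x[:usable] for x in aligned], axis=0)

# A one-sided CUSUM on the reduced series (reference value and threshold are ad hoc).
k, h, s = 0.5 * shift * len(lags), 5.0, 0.0
alarm = None
for t, z in enumerate(reduced):
    s = max(0.0, s + z - k)
    if s > h:
        alarm = t
        break
print("first alarm at time:", alarm, "(true change at", tau, ")")
```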
17.
18.
We propose an extension of parametric product partition models. We name our proposal nonparametric product partition models because we associate a random measure, instead of a parametric kernel, with each set within a random partition. Our methodology does not impose any specific form on the marginal distribution of the observations, allowing us to detect shifts of behaviour even when dealing with heavy-tailed or skewed distributions. We propose a suitable loss function and find the partition of the data having minimum expected loss. We then apply our nonparametric procedure to multiple change-point analysis and compare it with PPMs and with other methodologies that have recently appeared in the literature. Also, in the context of missing data, we exploit the product partition structure in order to estimate the distribution function of each missing value, which allows us to detect change points using the loss function mentioned above. Finally, we present applications to financial as well as genetic data.
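Since the paper's loss function is not reproduced here, the following sketch uses the familiar Binder-type loss as a stand-in to show how a minimum-expected-loss partition can be selected from posterior partition draws; the toy draws and the equal misclassification costs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "posterior" sample of partitions: each row labels n observations with block ids.
# Sorting makes every draw a partition into contiguous segments, as in change-point models.
n, n_draws = 12, 200
draws = np.stack([np.sort(rng.integers(0, 3, size=n)) for _ in range(n_draws)])

# Posterior co-clustering probabilities: P(i and j fall in the same block).
co = np.mean(draws[:, :, None] == draws[:, None, :], axis=0)

def binder_loss(labels, co, penalty=0.5):
    """Binder-style expected loss (up to constants) with equal misclassification costs."""
    same = labels[:, None] == labels[None, :]
    return np.sum(same * (penalty - co))   # smaller when co-clustered pairs have high probability

# Pick the sampled partition with the smallest expected (Binder) loss.
losses = [binder_loss(d, co) for d in draws]
best = draws[int(np.argmin(losses))]
print("selected partition labels:", best)
```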
19.
Willian Luís de Oliveira, Carlos Alberto Ribeiro Diniz & Maria Durbán. Communications in Statistics - Simulation and Computation, 2013, 42(8): 2359-2383
A general class of models for discrete and/or continuous responses is proposed in which the joint distributions are constructed via the conditional approach. It is assumed that the distribution of one response, and the distribution of the other response given the first one, belong to the exponential family of distributions. Furthermore, the marginal means are related to the covariates by link functions, and a dependence structure between the responses is built into the model. Estimation methods, diagnostic analysis and a simulation study considering a Bernoulli-exponential model, a particular case of the class, are presented. Finally, the model is applied to a real data set.
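A minimal simulation sketch of a Bernoulli-exponential pair built by the conditional approach, with a logit link for the first response and a log link for the second given the first; the coefficients and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 1000
x = rng.normal(size=n)

# Conditional construction: a Bernoulli response, then an exponential response given it.
logit_p = -0.3 + 0.8 * x                  # logit link for the Bernoulli margin
p = 1 / (1 + np.exp(-logit_p))
y1 = rng.binomial(1, p)

log_mu = 0.5 + 0.4 * x + 0.9 * y1         # log link; the y1 term induces dependence
y2 = rng.exponential(np.exp(log_mu))

# The joint density factorizes as f(y1, y2 | x) = f(y1 | x) * f(y2 | y1, x),
# both factors being exponential-family distributions with their own link functions.
```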
20.
In animal digestibility studies, the proportion of degraded food over time has usually been modeled as a normal random variable whose mean is a function of time and of the following three parameters: the proportion of food degraded almost instantaneously, the remaining proportion of food to be degraded, and the velocity of degradation. The estimation of these parameters has been carried out mainly from a frequentist viewpoint, using the asymptotic distribution of the maximum likelihood estimator. This may give inadmissible estimates, such as values outside the range of the parameters. This drawback would not arise if a Bayesian approach were adopted. In this article an objective Bayesian analysis is developed and illustrated on real and simulated data.
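A short sketch under the assumption that the mean curve is the common Ørskov-McDonald-type parameterization a + b(1 - exp(-c t)), with a the instantaneous fraction, b the degradable fraction and c the velocity (whether this matches the paper's exact form is an assumption). The fit below is the frequentist nonlinear least-squares baseline; the Bayesian analysis proposed in the paper would instead place priors on (a, b, c) restricted to their valid ranges.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

def degradation_mean(t, a, b, c):
    """Mean degraded proportion at time t: instantaneous fraction a,
    degradable fraction b, degradation velocity c."""
    return a + b * (1 - np.exp(-c * t))

# Simulated digestibility data around the curve with normal noise (illustrative values).
t = np.linspace(0, 72, 30)                 # hours of incubation
true = dict(a=0.2, b=0.6, c=0.08)
y = degradation_mean(t, **true) + rng.normal(scale=0.02, size=t.size)

# Frequentist fit via nonlinear least squares; estimates may fall outside the
# admissible parameter ranges, which is the drawback the abstract points out.
est, cov = curve_fit(degradation_mean, t, y, p0=[0.1, 0.5, 0.05])
print("estimates (a, b, c):", est)
```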