Full-text access type
Paid full text | 2667 articles |
Free | 49 articles |
Domestic free | 4 articles |
Subject classification
Management | 72 articles |
Demography | 21 articles |
Collected series | 24 articles |
Theory and methodology | 8 articles |
General | 105 articles |
Sociology | 29 articles |
Statistics | 2461 articles |
Publication year
2023 | 11 articles |
2022 | 11 articles |
2021 | 28 articles |
2020 | 52 articles |
2019 | 90 articles |
2018 | 122 articles |
2017 | 195 articles |
2016 | 69 articles |
2015 | 80 articles |
2014 | 86 articles |
2013 | 979 articles |
2012 | 223 articles |
2011 | 78 articles |
2010 | 72 articles |
2009 | 62 articles |
2008 | 57 articles |
2007 | 51 articles |
2006 | 43 articles |
2005 | 44 articles |
2004 | 47 articles |
2003 | 27 articles |
2002 | 39 articles |
2001 | 35 articles |
2000 | 18 articles |
1999 | 24 articles |
1998 | 37 articles |
1997 | 25 articles |
1996 | 11 articles |
1995 | 9 articles |
1994 | 5 articles |
1993 | 8 articles |
1992 | 8 articles |
1991 | 6 articles |
1990 | 7 articles |
1989 | 3 articles |
1988 | 9 articles |
1987 | 3 articles |
1986 | 3 articles |
1985 | 10 articles |
1984 | 5 articles |
1983 | 11 articles |
1982 | 4 articles |
1980 | 3 articles |
1979 | 2 articles |
1978 | 1 article |
1977 | 3 articles |
1976 | 2 articles |
1975 | 1 article |
1973 | 1 article |
Sort order: 2720 results found; search time: 15 ms
21.
Jørn Schulz, Jan Terje Kvaløy, Kjersti Engan, Trygve Eftestøl, Samwel Jatosh, Hussein Kidanto, Hege Ersdal 《Journal of applied statistics》2020,47(11):1915
This article considers the analysis of complex monitored health data, where the current health status is often reflected by one or several signals that can be represented by a finite number of states, alongside a set of covariates. In particular, we consider a novel application of a non-parametric state intensity regression method to study time-dependent effects of covariates on the state transition intensities. The method can handle baseline, time-varying, and dynamic covariates, and because of its non-parametric nature it accommodates different data types and challenges under minimal assumptions. If the signal reflecting the current health status is continuous, we propose applying a weighted median and a hysteresis filter as data pre-processing steps to facilitate robust analysis. In intensity regression, covariates can be aggregated by a suitable functional form over a time history window. We propose studying the estimated cumulative regression parameters for different choices of the time history window in order to investigate short- and long-term effects of the given covariates. The proposed framework is discussed and applied to resuscitation data of newborns collected in Tanzania.
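The two pre-processing steps named above can be sketched as follows. This is our own minimal illustration for a one-dimensional signal discretised into two states, with hypothetical thresholds `low` and `high`; it is not the authors' implementation.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: smallest value whose cumulative weight reaches
    half the total weight."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * w.sum())]

def hysteresis_states(signal, low, high):
    """Map a continuous signal to states 0/1 with hysteresis:
    switch to 1 only above `high`, switch back to 0 only below `low`,
    so small fluctuations near a single threshold do not flip the state."""
    states, current = [], 0
    for x in signal:
        if current == 0 and x > high:
            current = 1
        elif current == 1 and x < low:
            current = 0
        states.append(current)
    return np.array(states)

# Example: a noisy heart-rate-like signal discretised into two states
sig = np.array([90, 95, 130, 125, 128, 100, 80, 78, 120, 135])
states = hysteresis_states(sig, low=85, high=120)  # [0 0 1 1 1 1 0 0 0 1]
```

The value 120 near the end does not re-trigger state 1 because the upward switch requires strictly exceeding `high`; this is exactly the robustness to borderline values that motivates the filter.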
22.
In recent years, a variety of regression models, including zero-inflated and hurdle versions, have been proposed to explain a count dependent variable in terms of exogenous covariates. Apart from the classical Poisson, negative binomial and generalised Poisson distributions, many proposals have appeared in the statistical literature, perhaps in response to the new possibilities offered by advanced software that now enables researchers to implement numerous special functions in a relatively simple way. However, a significant research gap remains: very little attention has been paid to the quasi-binomial distribution, first proposed over fifty years ago, although it might constitute a valid alternative to existing regression models when the variable has bounded support. In this paper we therefore present a zero-inflated regression model based on the quasi-binomial distribution, derive its moments and maximum likelihood estimators, and perform score tests comparing the zero-inflated quasi-binomial distribution with the zero-inflated binomial distribution, and the zero-inflated model with the homogeneous model (in which covariates are not considered). The analysis is illustrated with two data sets that are well known in the statistical literature and contain a large number of zeros.
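The zero-inflation mechanism referred to above mixes a point mass at zero with an ordinary count distribution. As a minimal sketch, here is the zero-inflated binomial that serves as the comparison model in the abstract (the parameter name `pi` for the zero-inflation probability is ours):

```python
import math

def zib_pmf(k, n, p, pi):
    """Zero-inflated binomial pmf: with probability pi the outcome is a
    structural zero; otherwise it is drawn from Binomial(n, p)."""
    binom = math.comb(n, k) * p**k * (1 - p) ** (n - k)
    return pi * (k == 0) + (1 - pi) * binom

# The pmf still sums to one over the bounded support 0..n
total = sum(zib_pmf(k, n=5, p=0.3, pi=0.2) for k in range(6))

# P(0) exceeds the plain binomial zero probability 0.7**5,
# reflecting the excess zeros the model is designed to capture
p0 = zib_pmf(0, n=5, p=0.3, pi=0.2)
```

Replacing the binomial component with the quasi-binomial pmf, as the paper proposes, changes only the `binom` line; the mixing structure is identical.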
23.
This paper addresses frequentist and Bayesian estimation of the unknown parameters of the generalized Lindley distribution based on lower record values. We first derive exact explicit expressions for the single and product moments of lower record values, and use these results to compute the means, variances and covariances between two lower record values. We then obtain the maximum likelihood estimators and associated asymptotic confidence intervals. Furthermore, we obtain Bayes estimators under gamma priors on both the shape and scale parameters of the generalized Lindley distribution, together with the associated highest posterior density (HPD) interval estimates. Bayesian estimation is studied under both a symmetric (squared error) and an asymmetric (linear-exponential, LINEX) loss function. Finally, we compute Bayesian predictive estimates and predictive interval estimates for future record values. To illustrate the findings, one real data set is analyzed, and Monte Carlo simulations are performed to compare the performance of the proposed methods of estimation and prediction.
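For readers unfamiliar with the data structure, lower record values are the successive new minima of a sequence of observations. A small helper (our own sketch, independent of the generalized Lindley model) extracts them:

```python
def lower_records(seq):
    """Return the lower record values of a sequence: the first observation
    and every subsequent observation strictly smaller than all before it."""
    records, current_min = [], float("inf")
    for x in seq:
        if x < current_min:
            records.append(x)
            current_min = x
    return records

sample = [5.1, 4.2, 6.0, 3.7, 3.9, 2.5]
recs = lower_records(sample)  # [5.1, 4.2, 3.7, 2.5]
```

The likelihood used in the paper is then a function of exactly these record values rather than of the full sample.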
24.
In this study, the components of extra-Poisson variability are estimated using random effect models under a Bayesian approach. The standard existing methodology for estimating extra-Poisson variability assumes a negative binomial distribution, which allows only one component of extra-Poisson variability to be estimated. The results show that the proposed random effect model yields more accurate estimates of the extra-Poisson variability components than the negative binomial approach. Some illustrative examples based on real data sets are presented.
25.
Kimberly F. Sellers, Derek S. Young 《Journal of Statistical Computation and Simulation》2019,89(9):1649-1673
While excess zeros are often thought to cause data over-dispersion (i.e. variance exceeding the mean), this implication is not absolute. One should instead consider a flexible class of distributions that can address data dispersion along with excess zeros. This work develops a zero-inflated sum-of-Conway-Maxwell-Poissons (ZISCMP) regression as a flexible tool for modelling count data that exhibit significant dispersion and contain excess zeros. This class contains several zero-inflated regressions as special cases, including the zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), zero-inflated binomial (ZIB), and zero-inflated Conway-Maxwell-Poisson (ZICMP). Through simulated and real data examples, we demonstrate the flexibility and usefulness of the class. We further apply it to shark species data from Australia's Great Barrier Reef to assess the environmental impact of human action on the numbers of various shark species.
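The Conway-Maxwell-Poisson building block underlying the class above has pmf proportional to λᵏ/(k!)^ν, with the dispersion parameter ν interpolating between over- and under-dispersion. A minimal sketch (our own, with the normalising constant truncated at a hypothetical cutoff `kmax` rather than computed exactly):

```python
import math

def cmp_pmf(k, lam, nu, kmax=100):
    """Conway-Maxwell-Poisson pmf with truncated normalising constant.
    nu = 1 recovers the Poisson; nu < 1 is over-dispersed, nu > 1
    under-dispersed relative to the Poisson."""
    # Build Z = sum_j lam^j / (j!)^nu iteratively to avoid huge factorials
    z, term = 1.0, 1.0
    for j in range(1, kmax + 1):
        term *= lam / j**nu
        z += term
    pk = 1.0
    for j in range(1, k + 1):
        pk *= lam / j**nu
    return pk / z

# Sanity check: nu = 1 recovers the Poisson pmf
poisson_p3 = 2.0**3 / math.factorial(3) * math.exp(-2.0)
cmp_p3 = cmp_pmf(3, lam=2.0, nu=1.0)
```

The zero-inflated variants in the paper then mix a point mass at zero with (sums of) such CMP components, in the same way a ZIP mixes a point mass with a Poisson.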
26.
Merve Kandemir Çetinkaya, Selahattin Kaçıranlar 《Journal of Statistical Computation and Simulation》2019,89(14):2645-2660
Negative binomial regression (NBR) and Poisson regression (PR) have become very popular for the analysis of count data in recent years. However, when the independent variables are highly correlated, these models suffer from multicollinearity. We introduce new two-parameter estimators (TPEs) for the NBR and PR models by generalising the two-parameter estimator of Özkale and Kaçıranlar [The restricted and unrestricted two-parameter estimators. Commun Stat Theory Methods. 2007;36:2707–2725]. These new estimators are general estimators that include the maximum likelihood (ML) estimator, the ridge estimator (RE), the Liu estimator (LE) and the contraction estimator (CE) as special cases. Biasing parameters for these estimators are given, and a Monte Carlo simulation is conducted to evaluate their performance under the mean square error (MSE) criterion. The benefits of the new TPEs are also illustrated in an empirical application. The results show that the new TPEs for the NBR and PR models outperform the ML estimator, the RE and the LE.
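In the linear-model analogue, the Özkale-Kaçıranlar two-parameter estimator takes the form β̂(k, d) = (X′X + kI)⁻¹(X′y + k·d·β̂_OLS), reducing to OLS for k = 0 and to the ridge estimator for d = 0. The sketch below is our own illustration for the linear model, not for the NBR/PR likelihoods treated in the paper:

```python
import numpy as np

def two_parameter_estimator(X, y, k, d):
    """Two-parameter estimator (X'X + kI)^{-1} (X'y + k*d*beta_OLS).
    k = 0 recovers OLS; d = 0 recovers the ridge estimator."""
    XtX = X.T @ X
    beta_ols = np.linalg.solve(XtX, X.T @ y)
    p = X.shape[1]
    return np.linalg.solve(XtX + k * np.eye(p), X.T @ y + k * d * beta_ols)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
X[:, 2] = X[:, 1] + 0.01 * rng.standard_normal(50)   # near-collinear columns
y = X @ np.array([1.0, 2.0, 0.5]) + rng.standard_normal(50)

b_ols = two_parameter_estimator(X, y, k=0.0, d=0.5)    # equals OLS
b_ridge = two_parameter_estimator(X, y, k=1.0, d=0.0)  # equals ridge with k=1
```

The biasing parameters k and d trade bias against variance; under severe multicollinearity, as in the constructed `X` above, intermediate values typically reduce MSE relative to OLS.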
27.
Doubly robust estimators offer two chances of consistently estimating a causal effect in the binary-treatment case. In this paper, we propose a covariate-balancing estimator of the causal effect of general treatment regimes. Under a parametric specification, our estimator is doubly robust.
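For binary treatments, the classic doubly robust construction is the augmented inverse-probability-weighting (AIPW) estimator, which is consistent if either the propensity model or the outcome model is correctly specified. The simulation below is our own illustration of that baseline idea, using the known true propensity score; it is not the covariate-balancing estimator proposed in the paper:

```python
import numpy as np

def aipw_ate(y, a, e, mu1, mu0):
    """AIPW estimate of the average treatment effect:
    mean of mu1 - mu0 + A*(Y - mu1)/e - (1-A)*(Y - mu0)/(1-e)."""
    return np.mean(mu1 - mu0 + a * (y - mu1) / e - (1 - a) * (y - mu0) / (1 - e))

rng = np.random.default_rng(1)
n = 20000
x = rng.standard_normal(n)
e = 1.0 / (1.0 + np.exp(-x))               # true propensity score
a = rng.binomial(1, e)                      # treatment assignment
y = 2.0 * a + x + rng.standard_normal(n)    # true average treatment effect = 2

# Correctly specified outcome models: E[Y|A=1,X] = x + 2, E[Y|A=0,X] = x
ate = aipw_ate(y, a, e, mu1=x + 2.0, mu0=x)
```

With both models correct, the two correction terms have mean zero, so `ate` concentrates near the true effect of 2; misspecifying one model (but not both) leaves the estimator consistent, which is the "double chance" in the abstract.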
28.
The main objective of this work is to evaluate the performance of confidence intervals, built using the deviance statistic, for the hyperparameters of state space models. The first procedure is a marginal approximation to confidence regions based on the likelihood ratio test, and the second is based on the signed root deviance profile. These methods are computationally efficient and are not affected by problems such as interval limits falling outside the parameter space, which can occur when the focus is on the error variances. The procedures are compared with the usual approaches in the literature, including the method based on the asymptotic distribution of the maximum likelihood estimator as well as bootstrap confidence intervals. The comparison is performed via a Monte Carlo study, in order to establish empirically the advantages and disadvantages of each method. The results show that the deviance-based methods achieve better coverage rates than the asymptotic and bootstrap procedures.
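A deviance-based interval keeps the parameter values whose profile deviance relative to the MLE stays below the χ²₁ quantile, which automatically respects the parameter space. As a minimal illustration of the principle (our own example for the variance of a normal sample, not the state-space implementation studied in the paper):

```python
import numpy as np

CHI2_95_DF1 = 3.841458820694124  # 0.95 quantile of chi-square with 1 df

def deviance_ci_variance(x):
    """Profile-deviance 95% CI for sigma^2 of N(mu, sigma^2), mu profiled out.
    Keeps values s2 with 2*(loglik(s2_mle) - loglik(s2)) <= chi2 cutoff."""
    n = len(x)
    s2_mle = np.var(x)  # MLE of sigma^2

    def deviance(s2):
        # 2 * profile log-likelihood drop; additive constants cancel
        return n * (np.log(s2 / s2_mle) + s2_mle / s2 - 1.0)

    grid = np.linspace(s2_mle / 10, s2_mle * 10, 20000)
    keep = grid[deviance(grid) <= CHI2_95_DF1]
    return keep[0], keep[-1]

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=200)   # true sigma^2 = 4
lo, hi = deviance_ci_variance(x)     # both limits positive by construction
```

Because the deviance is evaluated only on the admissible grid, the lower limit can never be negative, unlike Wald-type intervals for variances, which is exactly the pathology the abstract highlights.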
29.
In this paper, we analytically derive the exact formula for the mean squared error (MSE) of two weighted average (WA) estimators for each individual regression coefficient. We then conduct numerical evaluations to investigate the small sample properties of the WA estimators, and compare their MSE performance with that of other shrinkage estimators and the usual OLS estimator. Our numerical results show that (1) the WA estimators have smaller MSE than the other shrinkage estimators and the OLS estimator over a wide region of the parameter space; and (2) the region where the relative MSE of the WA estimator is smaller than that of the OLS estimator narrows as the number of explanatory variables k increases.
30.
In some statistical problems a degree of explicit prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, θ > θ1 or θ < θ1, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case in which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
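The bagging idea above can be sketched simply: instead of truncating θ̂ at θ1 once, truncate on each bootstrap resample and average. The result exceeds θ1 strictly unless essentially every resample violates the constraint, and it varies smoothly with the data. Our own minimal illustration for a mean constrained to be positive (θ1 = 0):

```python
import numpy as np

def bagged_constrained_mean(x, theta1=0.0, n_boot=2000, seed=0):
    """Bagged estimator of a mean subject to theta > theta1: average of
    max(bootstrap sample mean, theta1) over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boot_means = np.array([
        np.mean(rng.choice(x, size=n, replace=True)) for _ in range(n_boot)
    ])
    return np.mean(np.maximum(boot_means, theta1))

rng = np.random.default_rng(3)
x = rng.normal(0.05, 1.0, size=50)   # sample mean may fall near the bound

naive = max(np.mean(x), 0.0)                  # hard truncation at theta1 = 0
bagged = bagged_constrained_mean(x, theta1=0.0)
```

When the sample mean sits near zero, the naive estimator collapses exactly onto the bound, whereas the bagged estimator stays strictly above it as long as a fraction of bootstrap means exceed zero, matching the behaviour described in the abstract.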