941.
This paper considers the problem of modeling migraine severity assessments and their dependence on weather and time characteristics. We take the viewpoint of a patient who is interested in an individual migraine management strategy. Since the factors influencing migraine can differ between patients in number and magnitude, we show how a patient's headache calendar, which records severity measurements on an ordinal scale, can be used to determine the dominating factors for that particular patient. One also has to account for dependencies among the measurements. For this, the autoregressive ordinal probit (AOP) model of Müller and Czado (J Comput Graph Stat 14:320–338, 2005) is utilized and fitted to a single patient's migraine data by a grouped move multigrid Monte Carlo (GM-MGMC) Gibbs sampler. Covariates are initially selected using proportional odds models. Model fit and model comparison are discussed; a comparison with proportional odds specifications shows that the AOP models are preferred.
942.
Approximate Bayesian inference on the basis of summary statistics is well-suited to complex problems for which the likelihood is either mathematically or computationally intractable. However, methods that use rejection suffer from the curse of dimensionality as the number of summary statistics increases. Here we propose a machine-learning approach to the estimation of the posterior density by introducing two innovations. The new method fits a nonlinear conditional heteroscedastic regression of the parameter on the summary statistics, and then adaptively improves estimation using importance sampling. The new algorithm is compared to state-of-the-art approximate Bayesian methods, and achieves a considerable reduction of the computational burden in two examples of inference, in statistical genetics and in a queueing model.
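The regression-adjustment idea behind this approach can be illustrated with a minimal sketch. The snippet below implements plain rejection ABC followed by a Beaumont-style local-linear adjustment, a simpler homoscedastic variant of the nonlinear heteroscedastic regression the abstract describes; the toy model, prior, and acceptance fraction are illustrative assumptions, not taken from the paper.

```python
import random

def abc_regression_adjust(observed_stat, prior_sample, simulate,
                          n_sims=2000, accept_frac=0.1):
    """Rejection ABC followed by a local-linear regression adjustment."""
    # Draw parameters from the prior and simulate one summary statistic each.
    draws = []
    for _ in range(n_sims):
        theta = prior_sample()
        s = simulate(theta)
        draws.append((theta, s, abs(s - observed_stat)))
    # Keep the fraction of draws whose statistics fall closest to the data.
    draws.sort(key=lambda d: d[2])
    kept = draws[:int(n_sims * accept_frac)]
    thetas = [t for t, _, _ in kept]
    stats = [s for _, s, _ in kept]
    # Regress theta on s among the accepted draws, then shift each accepted
    # draw to the observed statistic (Beaumont-style adjustment).
    ms = sum(stats) / len(stats)
    mt = sum(thetas) / len(thetas)
    var = sum((s - ms) ** 2 for s in stats)
    beta = (sum((s - ms) * (t - mt) for t, s in zip(thetas, stats)) / var
            if var > 0 else 0.0)
    return [t + beta * (observed_stat - s) for t, s in zip(thetas, stats)]

# Toy example: infer the mean of a N(theta, 1) sample from its sample mean.
random.seed(1)
posterior = abc_regression_adjust(
    observed_stat=1.0,
    prior_sample=lambda: random.uniform(-5, 5),
    simulate=lambda th: sum(random.gauss(th, 1) for _ in range(25)) / 25,
)
```

The adjusted sample concentrates around the observed statistic far more tightly than the raw accepted draws, which is what makes larger tolerances (and hence fewer simulations) affordable.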
943.
In biomedical studies where the event of interest is recurrent (e.g., hospitalization), it is often the case that the recurrent event sequence is subject to being stopped by a terminating event (e.g., death). In comparing treatment options, the marginal recurrent event mean is frequently of interest. One major complication in the recurrent/terminal event setting is that censoring times are not known for subjects observed to die, which renders standard risk-set-based methods of estimation inapplicable. We propose two semiparametric methods for estimating the difference or ratio of treatment-specific marginal mean numbers of events. The first method involves imputing unobserved censoring times, while the second method uses inverse probability of censoring weighting. In each case, imbalances in the treatment-specific covariate distributions are adjusted out through inverse probability of treatment weighting. After the imputation and/or weighting, the treatment-specific means (then their difference or ratio) are estimated nonparametrically. Large-sample properties are derived for each of the proposed estimators, with finite-sample properties assessed through simulation. The proposed methods are applied to kidney transplant data.
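One ingredient of the methods above, inverse probability of treatment weighting, can be sketched in isolation. The following is a minimal Hajek-type IPTW estimator of the difference in treatment-specific means; the data and the assumption of known propensity scores are purely illustrative, and the recurrent/terminal-event machinery of the abstract is not reproduced here.

```python
def iptw_mean_difference(treat, outcome, propensity):
    # Hajek-type inverse-probability-of-treatment-weighted group means:
    # treated subjects get weight 1/e(x), controls get weight 1/(1 - e(x)),
    # where e(x) is the propensity score.
    num1 = den1 = num0 = den0 = 0.0
    for t, y, e in zip(treat, outcome, propensity):
        if t == 1:
            num1 += y / e
            den1 += 1 / e
        else:
            num0 += y / (1 - e)
            den0 += 1 / (1 - e)
    mean1, mean0 = num1 / den1, num0 / den0
    return mean1 - mean0, mean1, mean0

# With a constant propensity of 0.5 the weighting reduces to plain group means.
treat = [1, 1, 0, 0, 1, 0]
outcome = [3.0, 5.0, 1.0, 2.0, 4.0, 3.0]
diff, m1, m0 = iptw_mean_difference(treat, outcome, [0.5] * 6)
```

With covariate-dependent propensities the same weights adjust out imbalance between the treatment groups, which is the role they play in the estimators described above.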
944.
One method of assessing the fit of an event history model is to plot the empirical standard deviation of standardised martingale residuals. We develop an alternative procedure which is valid also in the presence of measurement error and applicable to both longitudinal and recurrent event data. Since the covariance between martingale residuals at times t0 and t > t0 is independent of t, a plot of these covariances should, for fixed t0, have no time trend. A test statistic is developed from the increments in the estimated covariances, and we investigate its properties under various types of model misspecification. Applications of the approach are presented using two Brazilian studies measuring daily prevalence and incidence of infant diarrhoea and a longitudinal study into treatment of schizophrenia.
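The quantity behind this diagnostic, the empirical covariance between residuals at a fixed baseline time and at each later time, can be sketched as follows. This assumes residual paths are stored as a subject-by-time matrix; the test statistic built from the increments of these covariances is not implemented here.

```python
def covariance_with_baseline(resid, t0):
    # resid[i][t] is the martingale residual of subject i at time index t.
    # Returns the empirical covariance between the residuals at t0 and at
    # each later time; under a correctly specified model a plot of these
    # values against t should show no time trend.
    n = len(resid)
    m0 = sum(r[t0] for r in resid) / n
    covs = []
    for t in range(t0 + 1, len(resid[0])):
        mt = sum(r[t] for r in resid) / n
        covs.append(sum((r[t0] - m0) * (r[t] - mt) for r in resid) / (n - 1))
    return covs

# Constant residual paths: every covariance equals the variance at t0.
paths = [[1.0] * 3, [2.0] * 3, [-3.0] * 3, [0.0] * 3]
covs = covariance_with_baseline(paths, 0)
```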
945.
Mohamed Hebiri, Statistics and Computing, 2010, 20(2): 253-266
Conformal predictors, introduced by Vovk et al. (Algorithmic Learning in a Random World, Springer, New York, 2005), serve to build prediction intervals by exploiting a notion of conformity of the new data point with previously observed data. We propose a novel method for constructing prediction intervals for the response variable in multivariate linear models. The main emphasis is on sparse linear models, where only a few of the covariates have significant influence on the response variable even if the total number of covariates is very large. Our approach is based on combining the principle of conformal prediction with the ℓ1-penalized least squares estimator (LASSO). The resulting confidence set depends on a parameter ε > 0 and has a coverage probability larger than or equal to 1 − ε. The numerical experiments reported in the paper show that the length of the confidence set is small. Furthermore, as a by-product of the proposed approach, we provide a data-driven procedure for choosing the LASSO penalty. The selection power of the method is illustrated on simulated and real data.
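The conformal idea can be illustrated with the simpler split-conformal variant, here using ordinary least squares on a single covariate rather than the LASSO and the full conformal construction of the paper; the simulated data and the 90% level are illustrative choices.

```python
import math
import random

def split_conformal_interval(x, y, x_new, eps=0.1):
    # Split the sample: fit a model on the first half, calibrate on the second.
    n = len(x) // 2
    xf, yf, xc, yc = x[:n], y[:n], x[n:], y[n:]
    # Least-squares line fitted on the training half.
    mx, my = sum(xf) / n, sum(yf) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(xf, yf))
         / sum((xi - mx) ** 2 for xi in xf))
    a = my - b * mx
    # Conformity scores: absolute residuals on the calibration half.
    scores = sorted(abs(yi - (a + b * xi)) for xi, yi in zip(xc, yc))
    # (1 - eps) empirical quantile with the finite-sample correction,
    # which yields coverage >= 1 - eps for exchangeable data.
    k = min(len(scores) - 1, math.ceil((len(scores) + 1) * (1 - eps)) - 1)
    q = scores[k]
    pred = a + b * x_new
    return pred - q, pred + q

# Coverage check on simulated data y = 2x + noise.
random.seed(7)
x = [random.uniform(0, 10) for _ in range(400)]
y = [2 * xi + random.gauss(0, 1) for xi in x]
hits = 0
for _ in range(200):
    xn = random.uniform(0, 10)
    yn = 2 * xn + random.gauss(0, 1)
    lo, hi = split_conformal_interval(x, y, xn, eps=0.1)
    hits += lo <= yn <= hi
coverage = hits / 200
```

The empirical coverage lands near the nominal 1 − ε level, mirroring the finite-sample guarantee that the full conformal construction in the paper provides.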
946.
This paper considers settings where populations of units may experience recurrent events, termed failures for convenience, and where the units are subject to varying levels of usage. We provide joint models for the recurrent events and usage processes, which facilitate analysis of their relationship as well as prediction of failures. Data on usage are often incomplete and we show how to implement maximum likelihood estimation in such cases. Random effects models with linear usage processes and gamma usage processes are considered in some detail. Data on automobile warranty claims are used to illustrate the proposed models and estimation methodology.
947.
948.
Recently, the orthodox best linear unbiased predictor (BLUP) method was introduced for inference about random effects in Tweedie mixed models. With the use of h-likelihood, we illustrate that the standard likelihood procedures, developed for inference about fixed unknown parameters, can be used for inference about random effects. We show that the necessary standard error for the prediction interval of the random effect can be computed from the Hessian matrix of the h-likelihood. We also show numerically that the h-likelihood provides a prediction interval that maintains a more precise coverage probability than the BLUP method.
949.
Friedrich Leisch, Statistics and Computing, 2010, 20(4): 457-469
Centroid-based partitioning cluster analysis is a popular method for segmenting data into more homogeneous subgroups. Visualization can help tremendously to understand the positions of these subgroups relative to each other in higher-dimensional spaces and to assess the quality of partitions. In this paper we present several improvements on existing cluster displays using neighborhood graphs with edge weights based on cluster separation and convex hulls of inner and outer cluster regions. A new display called shadow-stars can be used to diagnose pairwise cluster separation with respect to the distribution of the original data. Artificial data and two case studies with real data are used to demonstrate the techniques.
950.
We present a new semi-parametric model for the prediction of implied volatility surfaces that can be estimated using machine learning algorithms. Given a reasonable starting model, a boosting algorithm based on regression trees sequentially minimizes generalized residuals computed as differences between observed and estimated implied volatilities. To overcome the poor predictive power of existing models, we include a grid in the region of interest, and implement a cross-validation strategy to find an optimal stopping value for the boosting procedure. Backtesting the out-of-sample performance on a large data set of implied volatilities from S&P 500 options, we provide empirical evidence of the strong predictive power of our model.