971.
Mohamed Hebiri. Statistics and Computing, 2010, 20(2): 253-266
Conformal predictors, introduced by Vovk et al. (Algorithmic Learning in a Random World, Springer, New York, 2005), serve to build prediction intervals by exploiting a notion of conformity of the new data point with previously observed data. We propose a novel method for constructing prediction intervals for the response variable in multivariate linear models. The main emphasis is on sparse linear models, where only a few of the covariates have significant influence on the response variable even if the total number of covariates is very large. Our approach combines the principle of conformal prediction with the ℓ1-penalized least squares estimator (LASSO). The resulting confidence set depends on a parameter ε > 0 and has coverage probability greater than or equal to 1 − ε. The numerical experiments reported in the paper show that the resulting confidence sets are short. Furthermore, as a by-product of the proposed approach, we provide a data-driven procedure for choosing the LASSO penalty. The selection power of the method is illustrated on simulated and real data.
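The split-conformal construction behind intervals of this kind can be sketched in a few lines. The snippet below is an illustrative Python sketch rather than the paper's method: it substitutes a plain least-squares line for the ℓ1-penalized (LASSO) estimator and uses absolute residuals on a held-out calibration half as conformity scores; all function names are hypothetical.

```python
import math
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x (stand-in for the LASSO)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def split_conformal_interval(xs, ys, x_new, eps=0.1):
    """Split-conformal prediction interval with coverage >= 1 - eps."""
    half = len(xs) // 2
    a, b = fit_line(xs[:half], ys[:half])            # fit on first half
    # Conformity scores: absolute residuals on the calibration half.
    scores = sorted(abs(y - (a + b * x)) for x, y in zip(xs[half:], ys[half:]))
    # Finite-sample (1 - eps) quantile of the calibration scores.
    k = math.ceil((len(scores) + 1) * (1 - eps)) - 1
    q = scores[min(k, len(scores) - 1)]
    y_hat = a + b * x_new
    return y_hat - q, y_hat + q

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]
lo, hi = split_conformal_interval(xs, ys, x_new=5.0, eps=0.1)
```

With 100 calibration points and eps = 0.1, the quantile index picks the 91st smallest score, which is the standard finite-sample correction for split conformal prediction.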
972.
This paper considers settings where populations of units may experience recurrent events, termed failures for convenience, and where the units are subject to varying levels of usage. We provide joint models for the recurrent events and usage processes, which facilitate analysis of their relationship as well as prediction of failures. Data on usage are often incomplete and we show how to implement maximum likelihood estimation in such cases. Random effects models with linear usage processes and gamma usage processes are considered in some detail. Data on automobile warranty claims are used to illustrate the proposed models and estimation methodology.
973.
974.
Recently, the orthodox best linear unbiased predictor (BLUP) method was introduced for inference about random effects in Tweedie mixed models. With the use of h-likelihood, we illustrate that the standard likelihood procedures, developed for inference about fixed unknown parameters, can be used for inference about random effects. We show that the necessary standard error for the prediction interval of the random effect can be computed from the Hessian matrix of the h-likelihood. We also show numerically that the h-likelihood provides a prediction interval that maintains a more precise coverage probability than the BLUP method.
975.
Friedrich Leisch. Statistics and Computing, 2010, 20(4): 457-469
Centroid-based partitioning cluster analysis is a popular method for segmenting data into more homogeneous subgroups. Visualization can help tremendously to understand the positions of these subgroups relative to each other in higher-dimensional spaces and to assess the quality of partitions. In this paper we present several improvements on existing cluster displays, using neighborhood graphs with edge weights based on cluster separation and convex hulls of inner and outer cluster regions. A new display called shadow-stars can be used to diagnose pairwise cluster separation with respect to the distribution of the original data. Artificial data and two case studies with real data are used to demonstrate the techniques.
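A minimal sketch of the edge-weight idea behind such neighborhood graphs, assuming a k-means-style setting (the paper's actual weights are based on cluster separation and differ in detail; the function name and weighting rule here are hypothetical):

```python
import math
from collections import defaultdict

def neighborhood_edges(points, centroids):
    """Edge weights between centroid pairs: for each point, find its
    closest and second-closest centroid; points lying midway between a
    pair (distance ratio near 1) pull that pair's edge weight up, so a
    heavy edge signals poorly separated clusters."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for p in points:
        d = sorted((math.dist(p, c), i) for i, c in enumerate(centroids))
        (d1, i1), (d2, i2) = d[0], d[1]
        key = tuple(sorted((i1, i2)))
        sums[key] += d1 / d2 if d2 > 0 else 1.0
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

centroids = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
points = [(0.1, 0.0), (0.2, 0.1), (0.9, 0.0), (0.8, 0.1),
          (10.1, 10.0), (9.9, 9.8)]
edges = neighborhood_edges(points, centroids)
```

Here the two overlapping clusters near the origin produce a much heavier edge than the pair involving the distant cluster.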
976.
We present a new semi-parametric model for the prediction of implied volatility surfaces that can be estimated using machine learning algorithms. Given a reasonable starting model, a boosting algorithm based on regression trees sequentially minimizes generalized residuals computed as differences between observed and estimated implied volatilities. To overcome the poor predictive power of existing models, we include a grid in the region of interest and implement a cross-validation strategy to find an optimal stopping value for the boosting procedure. Back-testing the out-of-sample performance on a large data set of implied volatilities from S&P 500 options, we provide empirical evidence of the strong predictive power of our model.
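The core loop, L2 boosting on regression stumps with a held-out set standing in for cross-validated stopping, can be sketched as follows (illustrative Python on one-dimensional inputs; all names are hypothetical and this is not the paper's estimator):

```python
import random

def fit_stump(xs, rs):
    """Best single-split regression stump on 1-D inputs, by SSE."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for cut in range(1, len(xs)):
        thr = xs[order[cut]]
        left = [rs[i] for i in order[:cut]]
        right = [rs[i] for i in order[cut:]]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - ml) ** 2 for r in left) + sum((r - mr) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x < thr else mr

def boost(xs, ys, xs_val, ys_val, rounds=50, lr=0.3):
    """L2 boosting on stumps; stop at the round with the best
    held-out error (stand-in for the cross-validated stopping rule)."""
    stumps, preds, preds_val = [], [0.0] * len(xs), [0.0] * len(xs_val)
    best_err, best_n = float("inf"), 0
    for m in range(rounds):
        rs = [y - p for y, p in zip(ys, preds)]      # generalized residuals
        h = fit_stump(xs, rs)
        stumps.append(h)
        preds = [p + lr * h(x) for p, x in zip(preds, xs)]
        preds_val = [p + lr * h(x) for p, x in zip(preds_val, xs_val)]
        err = sum((y - p) ** 2 for y, p in zip(ys_val, preds_val)) / len(ys_val)
        if err < best_err:
            best_err, best_n = err, m + 1
    return lambda x: sum(lr * h(x) for h in stumps[:best_n]), best_n

random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [(1.0 if x > 0 else -1.0) + random.gauss(0, 0.1) for x in xs]
xv = [random.uniform(-1, 1) for _ in range(50)]
yv = [(1.0 if x > 0 else -1.0) + random.gauss(0, 0.1) for x in xv]
model, n_rounds = boost(xs, ys, xv, yv)
```

The shrinkage factor `lr` and the validation-based stopping round `n_rounds` play the roles of the learning rate and the optimal stopping value found by cross-validation.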
977.
Yanqing Sun. Lifetime Data Analysis, 2010, 16(2): 271-298
In a longitudinal study, an individual is followed up over a period of time. Repeated measurements on the response and some time-dependent covariates are taken at a series of sampling times. The sampling times are often irregular and depend on covariates. In this paper, we propose a sampling adjusted procedure for the estimation of the proportional mean model without having to specify a sampling model. Unlike existing procedures, the proposed method is robust to model misspecification of the sampling times. Large sample properties are investigated for the estimators of both regression coefficients and the baseline function. We show that the proposed estimation procedure is more efficient than the existing procedures. Large sample confidence intervals for the baseline function are also constructed by perturbing the estimation equations. A simulation study is conducted to examine the finite sample properties of the proposed estimators and to compare with some of the existing procedures. The method is illustrated with a data set from a recurrent bladder cancer study.
978.
In this paper, the task of determining expected values of sample moments, where the sample members have been selected based on noisy information, is considered. This task is a recurring problem in the theory of evolution strategies. Exact expressions for expected values of sums of products of concomitants of selected order statistics are derived. Then, using Edgeworth and Cornish-Fisher approximations, we obtain explicit results that depend on coefficients which can be determined numerically. While the results are exact only for normal populations, it is shown experimentally that including skewness and kurtosis in the calculations can yield greatly improved results for other distributions.
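The quantity being approximated can also be estimated directly by Monte Carlo. The sketch below is a hypothetical Python illustration, not the paper's Edgeworth/Cornish-Fisher machinery: it estimates the expected mean of the concomitants when the top n_sel of n_pop standard-normal values are selected through additive Gaussian observation noise.

```python
import random

def selected_mean(n_pop, n_sel, noise_sd, reps=2000, seed=7):
    """Monte Carlo expectation of the mean of concomitants: rank n_pop
    standard-normal values by noisy observations, keep the n_sel largest,
    and average the underlying (noise-free) values."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs = [rng.gauss(0, 1) for _ in range(n_pop)]
        ranked = sorted(range(n_pop),
                        key=lambda i: xs[i] + rng.gauss(0, noise_sd),
                        reverse=True)
        total += sum(xs[i] for i in ranked[:n_sel]) / n_sel
    return total / reps

m_exact = selected_mean(10, 3, noise_sd=0.0)   # noise-free selection
m_noisy = selected_mean(10, 3, noise_sd=2.0)   # heavily noisy selection
```

As expected, observation noise attenuates the selection gain: the noisy selected mean sits well below the noise-free one but stays above zero.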
979.
Sven Knoth. Statistics and Computing, 2005, 15(4): 341-352
Originally, the exponentially weighted moving average (EWMA) control chart was developed for detecting changes in the process mean, and the average run length (ARL) became the most popular performance measure for schemes with this objective. When monitoring the mean of independent and normally distributed observations, the ARL can be determined with high precision. Nowadays, EWMA control charts are also used for monitoring the variance, and charts based on the sample variance S² are an appropriate choice. Applying ARL evaluation techniques known from mean-monitoring charts, however, is difficult: the most accurate method, solving a Fredholm integral equation with the Nyström method, fails due to an improper kernel in the case of chi-squared distributions. Here, we exploit the collocation method and the product Nyström method and compare them to Markov chain based approaches. We find that collocation achieves higher accuracy than currently established methods.
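For intuition, the ARL of such a chart can also be estimated crudely by simulation. The sketch below is a Monte Carlo stand-in for the collocation and Nyström computations discussed in the paper, for an upper one-sided EWMA chart on S² with illustrative, hypothetical parameters:

```python
import random

def ewma_s2_arl(lam, ucl, n=5, sigma=1.0, reps=500, max_len=5000, seed=3):
    """Monte Carlo zero-state ARL of an upper one-sided EWMA chart on the
    sample variance S^2 of subgroups of size n, for normal observations
    with standard deviation sigma."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        z = 1.0                       # start the EWMA at the in-control E[S^2]
        for t in range(1, max_len + 1):
            xs = [rng.gauss(0, sigma) for _ in range(n)]
            m = sum(xs) / n
            s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
            z = (1 - lam) * z + lam * s2
            if z > ucl:               # signal: run length reached
                break
        total += t
    return total / reps

arl_in = ewma_s2_arl(lam=0.3, ucl=1.8)              # in-control (sigma = 1)
arl_out = ewma_s2_arl(lam=0.3, ucl=1.8, sigma=1.5)  # variance increase
```

A variance increase drives the EWMA across the limit within a few subgroups, while the in-control run lengths are orders of magnitude longer, which is exactly the trade-off the ARL quantifies.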
980.
Using data from the AIDS Link to Intravenous Experiences cohort study as an example, an informative censoring model is used to characterize the repeated hospitalization process of a group of patients. Under the informative censoring assumption, the estimators of the baseline rate function and the regression parameters are shown to depend on a latent variable. It is therefore impractical to directly estimate the unknown quantities in the moments of the estimators, which are needed for the bandwidth selection of a smoothing estimator and for the construction of confidence intervals, based respectively on the asymptotic mean squared errors and the asymptotic distributions of the estimators. To overcome these difficulties, we develop a random weighted bootstrap procedure to select appropriate bandwidths and to construct approximate confidence intervals. Our method is simple and fast to implement in practice, and is at least as accurate as other bootstrap methods. A Monte Carlo simulation demonstrates the usefulness of the proposed method, and the procedure is further illustrated with a recurrent event sample of intravenous drug users receiving inpatient care over time.
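The random weighted bootstrap idea, perturbing each observation with an i.i.d. positive weight instead of resampling, can be sketched for a simple weighted-mean statistic (illustrative Python; the paper applies the scheme to far more involved estimators, and the function name is hypothetical):

```python
import random

def random_weighted_bootstrap_ci(xs, reps=2000, alpha=0.05, seed=11):
    """Random weighted bootstrap CI for the mean: draw i.i.d. Exp(1)
    weights, recompute the weighted mean, and take percentiles of the
    replicates. No observation is ever dropped, unlike resampling."""
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        w = [rng.expovariate(1.0) for _ in xs]
        tw = sum(w)
        vals.append(sum(wi * x for wi, x in zip(w, xs)) / tw)
    vals.sort()
    lo = vals[int(alpha / 2 * reps)]
    hi = vals[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

random.seed(2)
data = [random.gauss(5.0, 1.0) for _ in range(200)]
lo, hi = random_weighted_bootstrap_ci(data)
m = sum(data) / len(data)
```

Because every replicate uses all observations with smooth random weights, the scheme avoids the discreteness of ordinary resampling, which is one reason such weighted bootstraps can be attractive for bandwidth selection.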