41.
Jing Yang, Fang Lu, Hu Yang. Statistics, 2013, 47(6): 1193-1211
The outer product of gradients (OPG) estimation procedure based on the least squares (LS) approach was introduced by Xia et al. [An adaptive estimation of dimension reduction space. J Roy Statist Soc Ser B. 2002;64:363-410] to estimate the single-index parameter in partially linear single-index models (PLSIM). However, its asymptotic properties have not been established, and the efficiency of the LS-based method can be severely affected by outliers and heavy-tailed error distributions. In this paper, we first derive the asymptotic properties of the OPG estimator of Xia et al. (2002), and then develop a robust estimation procedure for PLSIM that combines the ideas of OPG and local rank (LR) inference, together with its theoretical properties. We derive the asymptotic relative efficiency (ARE) of the proposed LR-based procedure with respect to the LS-based method, and show that its expression is closely related to that of the signed-rank Wilcoxon test relative to the t-test. The proposed estimator achieves substantial efficiency gains across a wide range of non-normal error distributions while losing almost no efficiency under normal errors. Even in the worst case, the ARE has a lower bound of 0.864 for estimating the single-index parameter and 0.8896 for estimating the nonparametric function, relative to the LS-based estimators. Monte Carlo simulations and a real data analysis illustrate the finite-sample performance of the estimators.
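As a small numerical aside (not the paper's estimator), the classical ARE of the Wilcoxon signed-rank test relative to the t-test that the abstract invokes, ARE(f) = 12 * Var(f) * (∫ f(x)^2 dx)^2, can be evaluated directly for a few error distributions; the distributions chosen below are our own illustrative picks.

```python
# Minimal sketch: classical Wilcoxon-vs-t asymptotic relative efficiency,
# ARE(f) = 12 * sigma_f^2 * (integral of f^2)^2, for some candidate error laws.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def wilcoxon_vs_t_are(dist):
    """ARE = 12 * Var(X) * (∫ f(x)^2 dx)^2 for a density f with finite variance."""
    var = dist.var()
    int_f2, _ = quad(lambda x: dist.pdf(x) ** 2, -np.inf, np.inf)
    return 12.0 * var * int_f2 ** 2

for name, dist in [("normal", stats.norm()),
                   ("logistic", stats.logistic()),
                   ("t(3)", stats.t(3)),
                   ("Laplace", stats.laplace())]:
    print(f"{name:>8s}: ARE = {wilcoxon_vs_t_are(dist):.3f}")
# Normal errors give 3/pi ≈ 0.955; heavy-tailed errors give ARE > 1,
# consistent with the 0.864 lower bound cited in the abstract.
```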
42.
In this article, we use two efficient approaches to deal with the intractable integrals that arise when implementing Gibbs sampling in the nonlinear mixed effects model (NLMM) based on Dirichlet processes (DP). The first approach computes Laplace's approximation to the integral, chosen for its high accuracy, low cost, and ease of implementation. The second approach uses the no-gaps algorithm of MacEachern and Müller (1998) [Estimating mixtures of Dirichlet process models. Journal of Computational and Graphical Statistics 7:223-238] to perform Gibbs sampling without evaluating the difficult integral. We apply both approaches to real problems and simulations. Results show that both approaches perform well in density estimation and prediction, and are superior to the parametric analysis in that they can detect important model features, such as skewness, long tails, and multimodality, which the parametric analysis cannot.
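To make the first ingredient concrete, here is a minimal sketch of the univariate Laplace approximation, ∫ exp(h(u)) du ≈ exp(h(u*)) * sqrt(2π / (-h''(u*))), where u* maximizes h. The integrand below is a made-up random-effect-style example, not the NLMM/DP integral from the paper.

```python
# Sketch: Laplace approximation to a one-dimensional integral, compared with quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def h(u, y=1.2, sigma=0.5, tau=1.0):
    # log of Normal(y | mean=exp(u), sd=sigma) * Normal(u | 0, tau)  (hypothetical integrand)
    return (-0.5 * ((y - np.exp(u)) / sigma) ** 2
            - 0.5 * (u / tau) ** 2
            - np.log(2 * np.pi * sigma * tau))

def laplace_approx(h, bounds=(-5, 5)):
    res = minimize_scalar(lambda u: -h(u), bounds=bounds, method="bounded")
    u_star = res.x
    eps = 1e-4                                   # numerical second derivative of h at u*
    h2 = (h(u_star + eps) - 2 * h(u_star) + h(u_star - eps)) / eps ** 2
    return np.exp(h(u_star)) * np.sqrt(2 * np.pi / (-h2))

exact, _ = quad(lambda u: np.exp(h(u)), -10, 10)
print("quadrature:", exact)
print("Laplace   :", laplace_approx(h))
```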
43.
Capacitance is a critical performance characteristic of high-voltage-pulse capacitors, which are used to store and rapidly discharge electrical energy. These capacitors are usually stored for a long period before being put into use. Experimental results and engineering experience indicate that the capacitance increases with storage time and will eventually exceed the failure threshold, so a capacitor may fail during storage. This is a typical mode of degradation failure for long-storage products. Moreover, the capacitance degradation path can be divided into several stages according to its shifting characteristics: in the initial storage stage, lasting about three months, the capacitance increases slowly or fluctuates; in the middle stage, lasting about four months, it increases sharply; and in the third stage, it increases steadily. This degradation behaviour motivates a storage life prediction method based on a multi-phase degradation path model. The storage performance degradation mechanism of the high-voltage-pulse capacitor is investigated, providing the physical basis for a multi-phase Wiener degradation model. An identification procedure for the transition points in the degradation path is proposed using the maximum likelihood principle (MLP), and a Kruskal-Wallis test, which checks whether two populations follow the same distribution, confirms that the estimated transition points are statistically significant. The remaining parameters of the multi-phase degradation model are estimated by maximum likelihood estimation (MLE) once the transition points have been specified. A multi-phase inverse Gaussian (IG) distribution for the storage life of the capacitor is derived, and point and interval estimates of the reliable storage life are constructed with the bootstrap method. The efficiency and effectiveness of the proposed multi-phase degradation model are compared with storage life prediction under a single-phase model.
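The transition-point idea can be illustrated with a deliberately simplified two-phase version of the abstract's setup: a Wiener degradation path whose drift changes at an unknown time, with the transition index chosen by maximizing the Gaussian log-likelihood of the increments and checked by a Kruskal-Wallis test. All parameter values below are made up and this is not the paper's full multi-phase model.

```python
# Sketch: change-point estimation in a two-phase Wiener degradation path.
import numpy as np
from scipy.stats import norm, kruskal

rng = np.random.default_rng(0)
dt, n1, n2 = 1.0, 90, 120
drift1, drift2, sigma = 0.02, 0.10, 0.05
increments = np.concatenate([rng.normal(drift1 * dt, sigma * np.sqrt(dt), n1),
                             rng.normal(drift2 * dt, sigma * np.sqrt(dt), n2)])

def profile_loglik(inc, k):
    """Log-likelihood of a two-phase Wiener model with a change point after increment k."""
    ll = 0.0
    for seg in (inc[:k], inc[k:]):
        mu_hat = seg.mean()
        s_hat = max(seg.std(ddof=0), 1e-8)
        ll += norm.logpdf(seg, mu_hat, s_hat).sum()
    return ll

candidates = range(10, len(increments) - 10)
k_hat = max(candidates, key=lambda k: profile_loglik(increments, k))
stat, pval = kruskal(increments[:k_hat], increments[k_hat:])
print(f"estimated transition index: {k_hat} (true: {n1})")
print(f"Kruskal-Wallis p-value: {pval:.3g}")
```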
44.
This article considers the adaptive lasso procedure for the accelerated failure time model with multiple covariates, based on a weighted least squares method that uses Kaplan-Meier weights to account for censoring. The adaptive lasso performs variable selection and model estimation simultaneously. Under mild conditions, the estimator is shown to possess sparsity and oracle properties. We use the Bayesian Information Criterion (BIC) for tuning parameter selection and a bootstrap variance approach for standard errors. Simulation studies and two real data examples investigate the performance of the proposed method.
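A rough sketch of this kind of estimator is given below: Stute's Kaplan-Meier weights (a standard formula), followed by a weighted adaptive lasso implemented by rescaling rows and columns. The simulated data, the use of cross-validation instead of BIC, and all tuning choices are illustrative assumptions rather than the paper's specification.

```python
# Sketch: Kaplan-Meier (Stute) weighted adaptive lasso for an AFT model.
import numpy as np
from sklearn.linear_model import LassoCV

def km_weights(time, delta):
    """Stute's Kaplan-Meier weights, computed on data sorted by observed time."""
    n = len(time)
    order = np.argsort(time)
    d = delta[order].astype(float)
    w = np.zeros(n)
    surv_prod = 1.0
    for i in range(n):
        w[i] = d[i] / (n - i) * surv_prod
        surv_prod *= ((n - i - 1) / (n - i)) ** d[i]
    out = np.zeros(n)
    out[order] = w                      # map weights back to the original ordering
    return out

rng = np.random.default_rng(1)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.8, 0.5, 0, 0, 0, 0, 0])
log_t = X @ beta + rng.normal(scale=0.5, size=n)   # AFT: log T = X beta + error
cens = rng.normal(loc=1.0, scale=1.5, size=n)      # censoring times on the log scale
y = np.minimum(log_t, cens)
delta = (log_t <= cens).astype(int)

w = km_weights(y, delta)
sw = np.sqrt(w)
Xw, yw = X * sw[:, None], y * sw                   # weighted least-squares scaling

beta_init = np.linalg.lstsq(Xw, yw, rcond=None)[0]  # initial weighted LS estimate
scale = np.abs(beta_init) + 1e-8
fit = LassoCV(cv=5, fit_intercept=False).fit(Xw * scale, yw)
beta_hat = fit.coef_ * scale                        # adaptive-lasso estimate
print(np.round(beta_hat, 3))
```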
45.
In this article, we study variable selection and estimation for linear regression models with missing covariates. The proposed estimation method is almost as efficient as the popular least-squares-based method under normal random errors, and is empirically shown to be much more efficient and robust in the presence of heavy-tailed errors or outliers in the responses and covariates. To achieve sparsity, a variable selection procedure based on the SCAD penalty is proposed to carry out estimation and variable selection simultaneously, and the procedure is shown to possess the oracle property. To handle missing covariates, we consider inverse probability weighted estimators for the linear model when the selection probability is known or unknown. We show that the estimator using the estimated selection probability has a smaller asymptotic variance than the one using the true selection probability, and is therefore more efficient; thus the important Horvitz-Thompson property is verified for the penalized rank estimator with missing covariates in the linear model. Numerical examples demonstrate the performance of the estimators.
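Two ingredients named here can be sketched briefly: the SCAD penalty (Fan and Li's form with a = 3.7) and inverse-probability weights built from an estimated selection probability. The logistic missingness model and the data below are illustrative assumptions, not the paper's specification.

```python
# Sketch: SCAD penalty values and inverse-probability weights for complete cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

def scad_penalty(beta, lam, a=3.7):
    """Elementwise SCAD penalty p_lambda(|beta|) (Fan & Li form)."""
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
                    lam ** 2 * (a + 1) / 2))

rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=(n, 2))                       # always-observed variables
miss_logit = 0.5 + 0.8 * z[:, 0]                  # missingness depends on observed z
observed = rng.random(n) < 1 / (1 + np.exp(-miss_logit))

# estimated selection probabilities -> inverse-probability weights for complete cases
pi_hat = LogisticRegression().fit(z, observed.astype(int)).predict_proba(z)[:, 1]
ipw = observed / np.clip(pi_hat, 1e-3, None)

print("SCAD penalty at beta = [0.1, 1, 3] with lam = 0.5:",
      np.round(scad_penalty(np.array([0.1, 1.0, 3.0]), 0.5), 3))
print("mean IPW weight (should be near 1):", ipw.mean().round(3))
```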
46.
The purpose of this paper is to develop a detection algorithm for the first jump point in sample trajectories of jump-diffusions described as solutions of stochastic differential equations driven by α-stable white noise. This is done through a multivariate Lagrange interpolation approach. To this end, we implement a computer simulation algorithm in MATLAB to visualize the sample trajectories of the jump-diffusions for various combinations of the parameters arising in the stochastic differential equations.
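For a flavour of the trajectories involved, the sketch below simulates an Euler path of a jump-diffusion driven by α-stable noise and flags the first "jump-like" increment with a crude threshold rule. It is only a stand-in for visualization; it is not the multivariate Lagrange-interpolation detector developed in the paper, and all parameters are made up.

```python
# Sketch: Euler path of an alpha-stable-driven SDE and a naive first-jump flag.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(3)
alpha, T, n = 1.5, 1.0, 1000
dt = T / n
drift = lambda x: -0.5 * x
sigma = 0.3

x = np.zeros(n + 1)
noise = levy_stable.rvs(alpha, 0.0, scale=dt ** (1 / alpha), size=n, random_state=rng)
for i in range(n):
    x[i + 1] = x[i] + drift(x[i]) * dt + sigma * noise[i]

increments = np.abs(np.diff(x))
threshold = 10 * np.median(increments)            # crude, ad hoc jump criterion
jump_idx = np.argmax(increments > threshold) if np.any(increments > threshold) else None
if jump_idx is not None:
    print(f"first flagged jump at step {jump_idx} (time ≈ {jump_idx * dt:.3f})")
else:
    print("no increment exceeded the threshold")
```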
47.
We consider the issue of sampling from the posterior distribution of exponential random graph (ERG) models and other statistical models with intractable normalizing constants. Existing methods based on exact sampling are either infeasible or require very long computing times. We study a class of approximate Markov chain Monte Carlo (MCMC) sampling schemes that deal with this issue. We also develop a new Metropolis-Hastings kernel to sample sparse large networks from ERG models. We illustrate the proposed methods on several examples.
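As background, one standard way to handle an intractable normalizing constant is the exchange algorithm of Murray et al., sketched below on the simplest possible "ERG" model (edges only, parameter theta, sufficient statistic = edge count), for which exact simulation is easy. The abstract's own approximate MCMC schemes and Metropolis-Hastings kernel are not reproduced here.

```python
# Sketch: exchange algorithm for an edges-only ERG model with a doubly-intractable posterior.
import numpy as np

rng = np.random.default_rng(4)
n_nodes = 20
M = n_nodes * (n_nodes - 1) // 2                            # number of dyads
theta_true = -1.0
y_edges = rng.binomial(M, 1 / (1 + np.exp(-theta_true)))    # observed edge count

def simulate_edges(theta):
    """Exact draw from the edges-only ERG model: each dyad is Bernoulli(logit^-1(theta))."""
    return rng.binomial(M, 1 / (1 + np.exp(-theta)))

def log_q(edges, theta):
    """Unnormalized log-likelihood exp(theta * edges); Z(theta) is treated as unknown."""
    return theta * edges

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.3)                    # symmetric random-walk proposal
    aux = simulate_edges(prop)                              # auxiliary data from the proposal
    log_alpha = (log_q(y_edges, prop) + log_q(aux, theta)
                 - log_q(y_edges, theta) - log_q(aux, prop))
    # a flat prior on theta is assumed; otherwise prior terms would enter log_alpha here
    if np.log(rng.random()) < log_alpha:
        theta = prop
    chain.append(theta)
print("posterior mean of theta ≈", np.round(np.mean(chain[1000:]), 3))
```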
48.
Despite the simplicity of the Bernoulli process, developing good confidence interval procedures for its parameter, the probability of success p, is deceptively difficult. The binary data yield a discrete number of successes from a discrete number of trials, n. This discreteness results in actual coverage probabilities that oscillate with n for fixed values of p (and with p for fixed n). Moreover, this oscillation necessitates a large sample size to guarantee a good coverage probability when p is close to 0 or 1.

It is well known that the Wilson procedure is superior to many existing procedures because it is less sensitive to p than the others, and is therefore less costly. The procedures proposed in this article work as well as the Wilson procedure when 0.1 ≤ p ≤ 0.9, and are even less sensitive (i.e., more robust) than the Wilson procedure when p is close to 0 or 1. Specifically, when the nominal coverage probability is 0.95, the Wilson procedure requires a sample size of 1,021 to guarantee that the coverage probabilities stay above 0.92 for any 0.001 ≤ min{p, 1 − p} < 0.01. By contrast, our procedures guarantee the same coverage probabilities with a sample size of only 177, without increasing either the expected interval width or the standard deviation of the interval width.
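The coverage calculation described here is easy to reproduce for the Wilson interval itself: the sketch below computes the exact coverage probability of the 95% Wilson interval over a grid of small p for a few sample sizes. The sample sizes and the p-grid are our own choices for illustration.

```python
# Sketch: exact coverage probability of the 95% Wilson interval near the boundary.
import numpy as np
from scipy.stats import norm, binom

def wilson_interval(x, n, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    phat = x / n
    center = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - half, center + half

def coverage(n, p, conf=0.95):
    """Exact coverage: sum over x of P(X = x) * 1{p lies in the interval built from x}."""
    x = np.arange(n + 1)
    lo, hi = wilson_interval(x, n, conf)
    return np.sum(binom.pmf(x, n, p) * ((lo <= p) & (p <= hi)))

for n in (50, 200, 1021):
    ps = np.linspace(0.001, 0.01, 50)
    worst = min(coverage(n, p) for p in ps)
    print(f"n = {n:5d}: minimum coverage over 0.001 <= p <= 0.01 is {worst:.3f}")
# The oscillation of coverage with n and p, and the large n needed near the boundary,
# show up directly in the printed minima.
```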
49.
A pivotal quantity for a capture-recapture model is introduced and used to construct an asymptotic confidence region for (ε, N), where ε is the capture efficiency and N is the population size. The true confidence levels of selected regions are obtained by simulation, and several confidence regions for (ε, N) are drawn to show their size and how the confidence limits for N depend on ε.
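Since the abstract does not spell out the pivotal quantity, the sketch below is only a generic illustration of a joint confidence region for (ε, N): a classical M0-type capture-recapture model (each of N animals caught independently with probability ε on each of t occasions), with an approximate 95% likelihood-ratio region evaluated on a grid. The model, the grid, and all parameter values are assumptions for illustration.

```python
# Sketch: grid-based 95% likelihood-ratio confidence region for (epsilon, N) under model M0.
import numpy as np
from scipy.special import gammaln
from scipy.stats import chi2

rng = np.random.default_rng(5)
N_true, eps_true, t = 400, 0.15, 5
caps = rng.binomial(t, eps_true, size=N_true)   # per-animal capture counts
caps = caps[caps > 0]                           # only ever-captured animals are observed
D, total = len(caps), caps.sum()

def log_lik(eps, N):
    if N < D:
        return -np.inf
    log_binom_N_D = gammaln(N + 1) - gammaln(D + 1) - gammaln(N - D + 1)
    return (log_binom_N_D
            + (N - D) * t * np.log1p(-eps)           # never-captured animals
            + total * np.log(eps)                     # captures of observed animals
            + (D * t - total) * np.log1p(-eps))       # non-captures of observed animals

eps_grid = np.linspace(0.05, 0.30, 120)
N_grid = np.arange(D, 1000)
ll = np.array([[log_lik(e, N) for e in eps_grid] for N in N_grid])
in_region = 2 * (ll.max() - ll) <= chi2.ppf(0.95, df=2)

i_N, i_eps = np.unravel_index(ll.argmax(), ll.shape)
print("grid MLE (eps, N):", round(eps_grid[i_eps], 3), N_grid[i_N])
print("N values inside the 95% region:",
      N_grid[in_region.any(axis=1)].min(), "to", N_grid[in_region.any(axis=1)].max())
```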
50.
This paper investigates nonparametric density estimation on [0, 1]. The kernel density estimator on [0, 1] is known to be sensitive to both the bandwidth and the kernel. This paper proposes a unified Bayesian framework for choosing both the bandwidth and the kernel function. In a simulation study, the Bayesian bandwidth estimator outperformed its competitors, and the kernel estimators were sensitive to the choice of kernel and to the shapes of the population densities on [0, 1]. The simulation and empirical results demonstrate that the proposed methods can improve on how probability densities on [0, 1] are presently estimated.
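The sensitivity that motivates the paper is easy to see in a small experiment: on [0, 1], the error of a kernel density estimate changes markedly with the bandwidth and the kernel. The Bayesian bandwidth/kernel selection developed in the paper is not implemented here; the target density, kernels, and bandwidths below are illustrative choices.

```python
# Sketch: sensitivity of KDE on [0, 1] to bandwidth and kernel, measured against a known density.
import numpy as np
from sklearn.neighbors import KernelDensity
from scipy.stats import beta

rng = np.random.default_rng(6)
x = beta.rvs(2, 5, size=300, random_state=rng)[:, None]   # a density supported on [0, 1]
grid = np.linspace(0.001, 0.999, 200)[:, None]
true_pdf = beta.pdf(grid.ravel(), 2, 5)

for kernel in ("gaussian", "epanechnikov"):
    for bw in (0.02, 0.05, 0.15):
        kde = KernelDensity(kernel=kernel, bandwidth=bw).fit(x)
        est = np.exp(kde.score_samples(grid))              # score_samples returns log-density
        mse = np.mean((est - true_pdf) ** 2)               # mean squared error on the grid
        print(f"kernel={kernel:>12s}  bandwidth={bw:.2f}  MSE={mse:.4f}")
```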