Sorted by relevance: 10,000 results found (search time: 15 ms).
991.
992.
Summary. We propose covariance-regularized regression, a family of methods for prediction in high dimensional settings that uses a shrunken estimate of the inverse covariance matrix of the features to achieve superior prediction. An estimate of the inverse covariance matrix is obtained by maximizing the log-likelihood of the data, under a multivariate normal model, subject to a penalty; it is then used to estimate coefficients for the regression of the response onto the features. We show that ridge regression, the lasso and the elastic net are special cases of covariance-regularized regression, and we demonstrate that certain previously unexplored forms of covariance-regularized regression can outperform existing methods in a range of situations. The covariance-regularized regression framework is extended to generalized linear models and linear discriminant analysis, and is used to analyse gene expression data sets with multiple class and survival outcomes.
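The coefficient construction described above can be sketched numerically. The sketch below substitutes a simple ridge-type shrinkage of the sample covariance toward the identity for the paper's penalized-likelihood estimate; the simulated data, the shrinkage level `alpha`, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations, p features, sparse true coefficients.
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]
y = X @ beta_true + rng.standard_normal(n)

def shrunken_precision(X, alpha):
    # Ridge-type shrinkage of the sample covariance toward the identity;
    # the paper's estimate would instead maximize an L1-penalized
    # Gaussian log-likelihood.
    S = np.cov(X, rowvar=False)
    S_shrunk = (1 - alpha) * S + alpha * np.eye(X.shape[1])
    return np.linalg.inv(S_shrunk)

def cov_reg_coef(X, y, alpha):
    # Regression coefficients built from the regularized precision matrix:
    # beta_hat = Theta_hat @ Cov(X, y).
    Theta = shrunken_precision(X, alpha)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    s_xy = Xc.T @ yc / (len(y) - 1)
    return Theta @ s_xy

beta_hat = cov_reg_coef(X, y, alpha=0.1)
```

With `alpha = 0` this recovers ordinary least squares, and stronger shrinkage recovers ridge-like behaviour, mirroring the special cases noted in the abstract.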
993.
Several models for studies related to tensile strength of materials are proposed in the literature where the size or length component has been taken to be an important factor for studying the specimens' failure behaviour. An important model, developed on the basis of the cumulative damage approach, is the three-parameter extension of the Birnbaum–Saunders fatigue model that incorporates size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and performs better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some recent toolkits, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
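A minimal illustration of the MCMC-based Bayesian fitting described above, using a lognormal failure-time model with a size covariate as a stand-in for the cumulative-damage models in the paper; the model form, parameter values, and sampler settings are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic tensile data: failure time shrinks with specimen size.
size = rng.uniform(1.0, 5.0, 80)
t = np.exp(1.5 - 0.4 * np.log(size) + 0.3 * rng.standard_normal(80))

def log_post(theta):
    # Log-posterior under a lognormal stand-in model with flat priors;
    # the error sd is fixed at 0.3 for brevity.
    mu0, b = theta
    mu = mu0 - b * np.log(size)
    resid = (np.log(t) - mu) / 0.3
    return -0.5 * np.sum(resid ** 2)

def metropolis(log_post, init, n_iter=4000, step=0.05):
    # Random-walk Metropolis sampler.
    theta = np.array(init, float)
    lp = log_post(theta)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws)

draws = metropolis(log_post, init=[1.0, 0.0])
mu0_hat, b_hat = draws[2000:].mean(axis=0)  # posterior means after burn-in
```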
994.
Semiparametric regression models that use spline basis functions with penalization have graphical model representations. This link is more powerful than previously established mixed model representations of semiparametric regression, as a larger class of models can be accommodated. Complications such as missingness and measurement error are more naturally handled within the graphical model architecture. Directed acyclic graphs, also known as Bayesian networks, play a prominent role. Graphical model-based Bayesian 'inference engines', such as BUGS and VIBES, facilitate fitting and inference. Underlying these are Markov chain Monte Carlo schemes and recent developments in variational approximation theory and methodology.
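The penalized-spline representation behind the mixed-model/graphical-model link can be sketched directly: a truncated-linear basis whose knot coefficients receive a ridge penalty, these being exactly the coefficients treated as random effects in the mixed-model formulation. The data, knot placement, and smoothing parameter below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy nonlinear data.
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(150)

# Truncated-linear spline basis: [1, x, (x - k)_+ for each knot k].
knots = np.linspace(0.05, 0.95, 20)
Z = np.maximum(x[:, None] - knots[None, :], 0.0)
C = np.column_stack([np.ones_like(x), x, Z])

# Penalize only the knot coefficients (a ridge penalty), leaving the
# linear part unpenalized -- the mixed-model representation with knot
# coefficients as random effects.
lam = 1e-2
D = np.diag([0.0, 0.0] + [1.0] * len(knots))
coef = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
fit = C @ coef
```

The smoothing parameter `lam` plays the role of the ratio of error variance to random-effect variance in the mixed-model form.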
995.
The variational approach to Bayesian inference enables simultaneous estimation of model parameters and model complexity; an interesting feature is that it leads to an automatic choice of model complexity. Empirical results from the analysis of hidden Markov models with Gaussian observation densities illustrate this. If the variational algorithm is initialized with a large number of hidden states, redundant states are eliminated as the method converges to a solution, thereby selecting the number of hidden states. In addition, through the use of a variational approximation, the deviance information criterion for Bayesian model selection can be extended to the hidden Markov model framework. Calculation of the deviance information criterion provides a further tool for model selection, which can be used in conjunction with the variational approach.
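The deviance information criterion mentioned above is computed from posterior draws as the mean deviance plus an effective-parameter count pD. A minimal sketch on a toy Gaussian model (not an HMM; the model and data here are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: Gaussian with unknown mean and known sd = 1.
y = rng.normal(2.0, 1.0, 50)
n = len(y)

# Posterior for the mean under a flat prior is N(ybar, 1/n); draw from it
# directly (in the HMM setting these draws would come from the variational
# approximation instead).
post_draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), 2000)

def deviance(mu):
    # -2 * log-likelihood under N(mu, 1).
    return np.sum((y - mu) ** 2) + n * np.log(2 * np.pi)

# DIC = mean deviance + pD, where the effective number of parameters is
# pD = mean deviance - deviance at the posterior mean.
dbar = np.mean([deviance(mu) for mu in post_draws])
pD = dbar - deviance(post_draws.mean())
DIC = dbar + pD
```

For this one-parameter model pD should land near 1, which is the sanity check usually quoted for the criterion.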
996.
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model attempts to separately estimate the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model. A residual analysis is performed in order to select an appropriate model. The authors would like to thank the editor and referees for their helpful comments. This work was supported by CNPq, Brazil.
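A sketch of maximum-likelihood fitting for a mixture cure model of this kind, with a constant cure fraction and an exponential latency distribution standing in for the generalized log-gamma; the simulated data and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulated survival data with a 30% cured (immune) fraction; susceptible
# subjects have exponential event times and follow-up is censored at 5.
n = 400
cured = rng.uniform(size=n) < 0.3
t_event = rng.exponential(1.0, n)
t_event[cured] = np.inf
t = np.minimum(t_event, 5.0)
delta = (t_event <= 5.0).astype(float)  # 1 = event observed

def negloglik(params):
    # Mixture cure model: S(t) = p + (1 - p) * exp(-rate * t), with an
    # exponential latency simplifying the paper's generalized log-gamma.
    logit_p, log_rate = params
    p = 1.0 / (1.0 + np.exp(-logit_p))
    rate = np.exp(log_rate)
    f = (1 - p) * rate * np.exp(-rate * t)   # density contribution (events)
    S = p + (1 - p) * np.exp(-rate * t)      # survivor contribution (censored)
    return -np.sum(delta * np.log(f) + (1 - delta) * np.log(S))

res = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))    # estimated cure fraction
rate_hat = np.exp(res.x[1])                # estimated event rate
```

Replacing the constant `p` with a logistic regression on covariates gives the surviving-fraction model described in the abstract.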
997.
The present study deals with estimation of the parameters of a k-component load-sharing parallel system model in which each component's failure time distribution is assumed to be geometric. The maximum likelihood estimates of the load-share parameters, with their standard errors, are obtained. (1 − γ)100% joint, Bonferroni simultaneous and two bootstrap confidence intervals for the parameters are constructed. Further, recognizing that life testing experiments are time consuming, it seems realistic to consider the load-share parameters to be random variables. Therefore, Bayes estimates, along with their standard errors, are obtained by assuming Jeffreys' invariant and gamma priors for the unknown parameters. Since Bayes estimators cannot be found in closed form, Tierney and Kadane's approximation method has been used to compute Bayes estimates and standard errors of the parameters. A Markov chain Monte Carlo technique, the Gibbs sampler, is also used to obtain Bayes estimates and highest posterior density credible intervals of the load-share parameters, with the Metropolis–Hastings algorithm used to generate samples from the posterior distributions of the unknown parameters.
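For geometric component lifetimes the maximum likelihood estimates reduce to failures-over-exposures counts. The sketch below simulates a two-component load-sharing system in which the surviving component's per-cycle failure probability rises after the first failure; the parameter values and the two-phase structure are hypothetical illustrations, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-component load-sharing parallel system with geometric lifetimes:
# while both components work, each fails per cycle with prob theta1;
# after the first failure the survivor carries the full load and fails
# per cycle with the higher prob theta2.
theta1, theta2 = 0.05, 0.15
n_sys = 500

trials1 = fails1 = trials2 = fails2 = 0
for _ in range(n_sys):
    # Phase 1: cycles until the first component failure.
    while True:
        f = rng.uniform(size=2) < theta1
        trials1 += 2
        fails1 += f.sum()
        if f.any():
            break
    # Phase 2: the surviving component (if exactly one failed) under load.
    if f.sum() == 1:
        while True:
            trials2 += 1
            if rng.uniform() < theta2:
                fails2 += 1
                break

# Geometric (Bernoulli-trial) MLEs: observed failures / exposures.
theta1_hat = fails1 / trials1
theta2_hat = fails2 / trials2
```

The same failure/exposure counts are the sufficient statistics that the gamma-prior Bayes estimates in the abstract would be built from.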
998.
A strictly nonparametric bivariate test for the two-sample location problem is proposed. The proposed test is easy to apply and does not require the stringent condition of affine symmetry or elliptical symmetry which is required by some of the major tests available for the same problem. The power function of the proposed test is calculated. The asymptotic distribution of the proposed test statistic is found to be normal. The power of the proposed test is compared with that of some well-known tests under various distributions using Monte Carlo simulation. The power study shows that the proposed test statistic performs better than most of the competing test statistics for almost all the distributions considered here. As soon as the underlying population structure deviates from normality, the ability of the proposed test statistic to detect the smallest shift in location increases relative to its competitors. The application of the test is illustrated using a data set.
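A generic permutation version of a bivariate two-sample location test (not the specific statistic proposed in the paper) illustrates how permuting group labels removes any need for affine or elliptical symmetry; the statistic, sample sizes and shift below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two bivariate samples differing in location; no symmetry assumed.
x = rng.standard_normal((60, 2))
y = rng.standard_normal((60, 2)) + np.array([1.2, 1.2])

def stat(a, b):
    # Largest coordinatewise difference of medians -- a simple
    # location statistic chosen for illustration.
    return np.max(np.abs(np.median(a, axis=0) - np.median(b, axis=0)))

obs = stat(x, y)

# Null distribution by permuting group labels.
pooled = np.vstack([x, y])
n_x = len(x)
n_perm = 999
count = 0
for _ in range(n_perm):
    perm = rng.permutation(len(pooled))
    if stat(pooled[perm[:n_x]], pooled[perm[n_x:]]) >= obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)
```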
999.
We propose a phase I clinical trial design that seeks to determine the cumulative safety of a series of administrations of a fixed dose of an investigational agent. In contrast with traditional phase I trials that are designed solely to find the maximum tolerated dose of the agent, our design instead identifies a maximum tolerated schedule that includes a maximum tolerated dose as well as a vector of recommended administration times. Our model is based on a non-mixture cure model that constrains the probability of dose-limiting toxicity for all patients to increase monotonically with both dose and the number of administrations received. We assume a specific parametric hazard function for each administration and compute the total hazard of dose-limiting toxicity for a schedule as the sum of the individual administration hazards. Across a variety of settings motivated by an actual study in allogeneic bone marrow transplant recipients, we demonstrate that our approach has excellent operating characteristics and performs as well as the only other currently published design for schedule-finding studies. We also present arguments for the preference of our non-mixture cure model over the existing model.
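The total-hazard construction can be sketched directly: each administration contributes a parametric cumulative hazard, and the non-mixture cure model converts the sum into a probability of dose-limiting toxicity (DLT). The Weibull-type hazard form, parameter values, and schedules below are illustrative assumptions, not those of the published design.

```python
import numpy as np

def admin_cum_hazard(t, s, dose, shape=1.5, scale=20.0):
    # Cumulative hazard contributed by one administration given at time s,
    # evaluated at follow-up time t; zero before the administration.
    # Weibull-type form scaled by dose (illustrative choice).
    u = np.maximum(t - s, 0.0)
    return dose * (u / scale) ** shape

def p_dlt(t, schedule, dose):
    # Non-mixture cure model: P(DLT by t) = 1 - exp(-total hazard),
    # with the total hazard summed over administrations.
    H = sum(admin_cum_hazard(t, s, dose) for s in schedule)
    return 1.0 - np.exp(-H)

short_sched = [0, 7, 14]            # three weekly administrations
long_sched = [0, 7, 14, 21, 28]     # five administrations, same dose
p_short = p_dlt(60.0, short_sched, dose=0.1)
p_long = p_dlt(60.0, long_sched, dose=0.1)
```

Because the total hazard is a sum of nonnegative terms that grow with dose, the DLT probability is automatically monotone in both dose and the number of administrations, which is the constraint the abstract describes.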
1000.
This installment of "Serials Spoken Here" covers events that transpired between late September and late October 2008. Reported herein are two Webinars, one on ONIX for Serials and the other on SUSHI, and two conferences: the Eighty-Fourth Annual Meeting of the Potomac Technical Processing Librarians and the New England Library Association's Annual Conference.
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号