751.
Semiparametric regression models that use penalized spline basis functions have graphical model representations. This link is more powerful than the previously established mixed model representations of semiparametric regression, because a larger class of models can be accommodated. Complications such as missingness and measurement error are handled more naturally within the graphical model architecture. Directed acyclic graphs, also known as Bayesian networks, play a prominent role. Graphical model-based Bayesian 'inference engines', such as BUGS and VIBES, facilitate fitting and inference. Underlying these are Markov chain Monte Carlo schemes and recent developments in variational approximation theory and methodology.
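As a minimal illustration of the penalized spline machinery underlying such models (not code from the paper; the truncated-line basis, knot placement, and penalty weight `lam` are all illustrative choices), here is a spline fit by penalized least squares. In the mixed-model/graphical-model view, the ridge-type penalty on the spline coefficients corresponds to a Gaussian prior with variance ratio `lam`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: smooth signal plus noise.
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Truncated-line spline basis with K interior knots.
K = 20
knots = np.quantile(x, np.linspace(0, 1, K + 2)[1:-1])
Z = np.maximum(x[:, None] - knots[None, :], 0.0)   # penalized spline part
X = np.column_stack([np.ones(n), x])               # unpenalized linear part
C = np.column_stack([X, Z])

# Penalized least squares: only the spline coefficients are shrunk.
# lam plays the role of sigma_eps^2 / sigma_u^2 in the mixed-model view,
# i.e. the spline coefficients get a zero-mean Gaussian prior.
lam = 1.0
D = np.diag([0.0, 0.0] + [1.0] * K)
coef = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
fitted = C @ coef
```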
752.
The variational approach to Bayesian inference enables simultaneous estimation of model parameters and model complexity; indeed, it leads to an automatic choice of model complexity. Empirical results from the analysis of hidden Markov models with Gaussian observation densities illustrate this: if the variational algorithm is initialized with a large number of hidden states, redundant states are eliminated as the method converges, effectively selecting the number of hidden states. In addition, through the variational approximation, the deviance information criterion for Bayesian model selection can be extended to the hidden Markov model framework. The deviance information criterion provides a further model-selection tool, which can be used in conjunction with the variational approach.
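The deviance information criterion itself is straightforward to compute from posterior draws. A toy sketch on a normal-mean model with known variance (the abstract's HMM extension via the variational approximation is not reproduced here; `draws` simply stands in for posterior samples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: i.i.d. normal with unknown mean, known sd = 1.
y = rng.normal(2.0, 1.0, size=100)

# Stand-in "posterior draws" of the mean (flat prior -> N(ybar, 1/n)).
draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=5000)

def deviance(mu):
    # -2 * log-likelihood for N(mu, 1)
    return np.sum((y - mu) ** 2) + len(y) * np.log(2 * np.pi)

d_bar = np.mean([deviance(mu) for mu in draws])   # posterior mean deviance
d_hat = deviance(draws.mean())                    # deviance at posterior mean
p_d = d_bar - d_hat                               # effective number of parameters
dic = d_bar + p_d
```

For this one-parameter model, `p_d` should come out close to 1.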
753.
In a sample of censored survival times, the presence of an immune proportion of individuals, who are not subject to death, failure, or relapse, may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model separately estimates the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs; the logistic function is used as the regression model for the surviving fraction. Inference for the model parameters is carried out via maximum likelihood. Influence diagnostics, such as the local influence and total local influence of an individual, are derived, analyzed, and discussed. Finally, a medical data set is analyzed under the generalized log-gamma mixture model, and a residual analysis is performed in order to select an appropriate model. This work was supported by CNPq, Brazil.
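A minimal sketch of a mixture cure model fit by maximum likelihood. For brevity it uses an exponential susceptible-survival distribution rather than the paper's generalized log-gamma, and an intercept-only logistic cure fraction; all names and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate: 30% cured (never fail); the rest fail with Exp(rate = 0.5);
# administrative censoring at t = 8.
n = 500
cured = rng.uniform(size=n) < 0.3
t_event = rng.exponential(1 / 0.5, size=n)
t_event[cured] = np.inf
time = np.minimum(t_event, 8.0)
event = (t_event <= 8.0).astype(float)

def negloglik(par):
    logit_pi, log_rate = par
    pi = 1 / (1 + np.exp(-logit_pi))      # cure fraction (logistic link)
    rate = np.exp(log_rate)
    s = np.exp(-rate * time)              # susceptible survival S(t)
    f = rate * s                          # susceptible density f(t)
    # Events: (1 - pi) f(t);  censored: pi + (1 - pi) S(t)
    ll = event * np.log((1 - pi) * f) + (1 - event) * np.log(pi + (1 - pi) * s)
    return -np.sum(ll)

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
rate_hat = np.exp(fit.x[1])
```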
754.
We propose a phase I clinical trial design that seeks to determine the cumulative safety of a series of administrations of a fixed dose of an investigational agent. In contrast with traditional phase I trials, which are designed solely to find the maximum tolerated dose of the agent, our design identifies a maximum tolerated schedule that includes a maximum tolerated dose as well as a vector of recommended administration times. Our model is based on a non-mixture cure model that constrains the probability of dose-limiting toxicity for all patients to increase monotonically with both dose and the number of administrations received. We assume a specific parametric hazard function for each administration and compute the total hazard of dose-limiting toxicity for a schedule as a sum of individual administration hazards. In a variety of settings motivated by an actual study in allogeneic bone marrow transplant recipients, we demonstrate that our approach has excellent operating characteristics and performs as well as the only other currently published design for schedule-finding studies. We also present arguments for preferring our non-mixture cure model over the existing model.
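The non-mixture construction can be sketched directly: the probability of dose-limiting toxicity by time t is 1 − exp(−H(t)), with H(t) the sum of per-administration cumulative hazards. The Weibull form and the parameter values below are illustrative stand-ins, not the authors' fitted model:

```python
import numpy as np

def p_dlt(schedule, t, dose, shape=1.5, scale_per_dose=0.001):
    """P(dose-limiting toxicity by time t) for a vector of administration
    times, under the non-mixture form 1 - exp(-H(t)): the total hazard is
    the sum of one Weibull cumulative hazard per administration already
    given, scaled by dose (an illustrative parameterization)."""
    times = np.asarray(schedule, dtype=float)
    elapsed = np.clip(t - times, 0.0, None)   # time since each administration
    H = dose * scale_per_dose * np.sum(elapsed ** shape)
    return 1.0 - np.exp(-H)

# Monotone in both the number of administrations and the dose:
weekly = [0, 7, 14, 21]
p_two = p_dlt(weekly[:2], t=28, dose=1.0)   # two administrations
p_four = p_dlt(weekly, t=28, dose=1.0)      # four administrations
p_four_hi = p_dlt(weekly, t=28, dose=2.0)   # four administrations, double dose
```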
755.
Data envelopment analysis (DEA) is a deterministic econometric model for calculating efficiency from data on an observed set of decision-making units (DMUs). We propose a method for calculating the distribution of efficiency scores. Our framework relies on estimating data from an unobserved set of DMUs. The model provides posterior predictive data for the unobserved DMUs, which augment the frontier in the DEA and thereby yield a posterior predictive distribution for the efficiency scores. We explore the method on a multiple-input and multiple-output DEA model. The data for the example are from a comprehensive examination of how nursing homes complete a standardized mandatory assessment of residents.
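For reference, the deterministic DEA building block, an input-oriented constant-returns-to-scale envelopment program, can be solved with a generic linear programming routine (the Bayesian posterior-predictive augmentation described above is not shown; the toy data are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (constant returns to scale).
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  X^T lambda - theta * x_j0 <= 0
    A_in = np.hstack([-X[j0][:, None], X.T])
    # Outputs: -Y^T lambda <= -y_j0   (i.e. Y^T lambda >= y_j0)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return float(res.fun)   # efficiency score theta in (0, 1]

# Three DMUs, one input, one output; DMU 0 defines the CRS frontier.
X = np.array([[1.0], [2.0], [2.0]])
Y = np.array([[2.0], [2.0], [3.0]])
scores = [dea_efficiency(X, Y, j) for j in range(3)]
```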
756.
757.
Plotting log–log survival functions against time for different categories, or combinations of categories, of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem with this technique is that the covariates must be categorical, or be made categorical by appropriate grouping of continuous covariates. Other limitations are the subjectivity of decisions based on visual judgment of the plots and the frequent inconclusiveness that arises as the number of categories and/or covariates grows. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log–log survival function. The test enables checking proportionality for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. The observed power and size of the test are compared with those of similar tests through simulation experiments. The simulations demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature across a wide range of survival settings. The test is illustrated using the widely used gastric cancer data.
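The graphical check rests on a simple identity: under PH, the gap between two groups' log(−log S(t)) curves is constant and equals the log hazard ratio. A sketch with uncensored Weibull data and a bare-bones Kaplan–Meier estimator (this illustrates the underlying identity only, not the paper's proposed numerical test):

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate at each distinct event time."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    times, surv, s, at_risk = [], [], 1.0, len(t)
    for u in np.unique(t):
        mask = t == u
        deaths = d[mask].sum()
        if deaths > 0:
            s *= 1.0 - deaths / at_risk
            times.append(u)
            surv.append(s)
        at_risk -= mask.sum()
    return np.array(times), np.array(surv)

rng = np.random.default_rng(3)
# Two groups with proportional hazards: Weibull shape 1.5, hazard ratio 2.
n = 400
t0 = rng.weibull(1.5, n)                     # group 0
t1 = rng.weibull(1.5, n) / 2 ** (1 / 1.5)    # group 1: hazard doubled
times0, s0 = kaplan_meier(t0, np.ones(n))
times1, s1 = kaplan_meier(t1, np.ones(n))

# Under PH, log(-log S1(t)) - log(-log S0(t)) is constant (= log HR).
grid = np.linspace(0.3, 1.2, 10)
ll0 = np.log(-np.log(np.interp(grid, times0, s0)))
ll1 = np.log(-np.log(np.interp(grid, times1, s1)))
gap = ll1 - ll0   # should hover around log(2) at all grid points
```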
758.
The estimation of a data transformation is very useful for producing response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types; these models are based on exponential dispersion models. We propose a new class of transformed generalized linear models that extends both the Box–Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models, and a simple formula to estimate the rth moment of the original dependent variable. We also explore the use of these models for time series data, extending the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214–223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets.
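The classical Box–Cox step that these models generalize, estimating the transformation parameter by profile likelihood, is a one-liner with SciPy (illustrative only; the paper's transformed GLMs go well beyond this):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Right-skewed positive response: the exponential of a normal variate,
# so the true normalizing transform is the log (Box-Cox lambda = 0).
y = np.exp(rng.normal(1.0, 0.4, size=500))

# With lmbda=None (the default), boxcox returns the transformed data and
# the profile-likelihood estimate of the transformation parameter.
y_trans, lam_hat = stats.boxcox(y)
```

Here `lam_hat` should land near 0, recovering the log transform.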
759.
In this note we provide a counterexample which resolves conjectures about Hadamard matrices made in this journal. Beder [1998. Conjectures about Hadamard matrices. Journal of Statistical Planning and Inference 72, 7–14] conjectured that if H is a maximal m×n row-Hadamard matrix then m is a multiple of 4, and that if n is a power of 2 then every row-Hadamard matrix can be extended to a Hadamard matrix. Using binary integer programming we obtain a maximal 13×32 row-Hadamard matrix, which disproves both conjectures. Additionally, for n a multiple of 4 up to 64, we tabulate the values of m for which we have found a maximal row-Hadamard matrix. Based on the tabulated results, we conjecture that an m×n row-Hadamard matrix with m ≥ n − 7 can be extended to a Hadamard matrix.
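A row-Hadamard matrix has pairwise-orthogonal ±1 rows. For tiny n, extendability can be checked by brute force rather than the binary integer programming the authors used; a toy sketch for n = 4:

```python
import itertools
import numpy as np

def extend_row_hadamard(H):
    """Try to extend a matrix of pairwise-orthogonal +-1 rows by one more
    +-1 row orthogonal to all current rows (brute force; small n only)."""
    n = H.shape[1]
    for bits in itertools.product([-1, 1], repeat=n):
        v = np.array(bits)
        if np.all(H @ v == 0):
            return np.vstack([H, v])
    return None   # maximal: no orthogonal +-1 row exists

# Start from a 2 x 4 row-Hadamard matrix and grow it to a full 4 x 4
# Hadamard matrix (for n = 4 the extension always succeeds).
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1]])
while H.shape[0] < H.shape[1]:
    H = extend_row_hadamard(H)
```

The paper's 13×32 counterexample is precisely a case where `extend_row_hadamard` would return `None` before the matrix is square, despite 32 being a power of 2.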
760.
Robust tests for the common principal components model
When dealing with several populations, the common principal components (CPC) model assumes equal principal axes but different variances along them. In this paper, a robust log-likelihood ratio statistic is introduced for testing the null hypothesis of a CPC model versus no restrictions on the scatter matrices. The proposal plugs robust scatter estimators into the classical log-likelihood ratio statistic. Using the same idea, a robust log-likelihood ratio statistic and a robust Wald-type statistic for testing proportionality against a CPC model are considered. Their asymptotic distributions under the null hypothesis and their partial influence functions are derived. A small simulation study compares the behavior of the classical and robust tests under normal and contaminated data.
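The plug-in ingredient, replacing the sample covariance with a robust scatter estimate, can be illustrated with a crude iteratively trimmed estimator on contaminated data (a self-contained stand-in; the paper uses proper robust scatter estimators within a full likelihood-ratio construction):

```python
import numpy as np

def reweighted_scatter(X, trim=0.25, n_iter=20):
    """Crude robust scatter: iteratively recompute the mean/covariance from
    the (1 - trim) fraction of points with smallest Mahalanobis distance.
    An illustrative stand-in for a proper robust scatter estimator."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(n_iter):
        mu = X[keep].mean(axis=0)
        S = np.cov(X[keep].T)
        d = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
        keep = d <= np.quantile(d, 1 - trim)
    return S

rng = np.random.default_rng(5)
true_cov = np.array([[1.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal([0, 0], true_cov, size=500)
X[:50] = rng.multivariate_normal([8, -8], np.eye(2), size=50)  # 10% outliers

S_classical = np.cov(X.T)          # badly distorted by the outliers
S_robust = reweighted_scatter(X)   # close to the clean-data scatter
```

Plugging estimates like `S_robust` into the log-likelihood ratio statistic is what makes the resulting CPC tests resistant to the contamination that wrecks the classical version.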