171.
The Best Worst Method (BWM) is a multi-criteria decision-making method that uses two vectors of pairwise comparisons to determine the weights of criteria. First, the decision-maker identifies the best (e.g. most desirable, most important) and the worst (e.g. least desirable, least important) criteria; the best criterion is then compared to the other criteria, and the other criteria to the worst criterion. A non-linear minmax model is used to identify the weights such that the maximum absolute difference between the weight ratios and their corresponding comparisons is minimized. The minmax model may result in multiple optimal solutions. Although in some cases decision-makers prefer to have multiple optimal solutions, in other cases they prefer a unique solution. The aim of this paper is twofold: first, we propose using interval analysis for the case of multiple optimal solutions, showing how the criteria can be weighted and ranked; second, we propose a linear model for BWM which is based on the same philosophy but yields a unique solution.
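The linear model sketched in this abstract amounts to a small linear program: minimize the maximum deviation ξ subject to the best-to-others and others-to-worst comparison constraints and a normalization. A minimal illustration (the comparison vectors below are hypothetical, and `scipy.optimize.linprog` stands in for any LP solver):

```python
import numpy as np
from scipy.optimize import linprog

def linear_bwm(best, worst, a_best, a_worst):
    """Linear BWM sketch: minimize xi subject to
    |w[best] - a_best[j] * w[j]| <= xi  and  |w[j] - a_worst[j] * w[worst]| <= xi,
    with sum(w) = 1 and w >= 0.  Decision variables are [w_1, ..., w_n, xi]."""
    n = len(a_best)
    c = np.zeros(n + 1)
    c[-1] = 1.0                          # objective: minimize xi
    A_ub, b_ub = [], []
    for j in range(n):
        if j != best:                    # best-to-others constraints
            row = np.zeros(n + 1)
            row[best], row[j] = 1.0, -a_best[j]
            for sign in (1.0, -1.0):     # linearize the absolute value
                r = sign * row
                r[-1] = -1.0
                A_ub.append(r); b_ub.append(0.0)
        if j != worst:                   # others-to-worst constraints
            row = np.zeros(n + 1)
            row[j], row[worst] = 1.0, -a_worst[j]
            for sign in (1.0, -1.0):
                r = sign * row
                r[-1] = -1.0
                A_ub.append(r); b_ub.append(0.0)
    A_eq = [np.append(np.ones(n), 0.0)]  # weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return res.x[:n], res.x[-1]
```

With fully consistent comparisons, e.g. `a_best = [1, 2, 4]` (criterion 1 best) and `a_worst = [4, 2, 1]` (criterion 3 worst), the deviation ξ is zero and the weights are (4/7, 2/7, 1/7), illustrating the unique solution the linear model yields.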
172.
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can differ substantially from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance these two characteristics in model-space selection. It addresses a collective property of the model space rather than the individual performance of each model. Tuning parameters in the DI control the size of the model space used for MA.
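As a minimal illustration of the model-averaging step itself (not the proposed diversity index, which requires the paper's full model-space construction), per-model ED estimates can be combined using Akaike weights; the AIC values and ED estimates below are hypothetical:

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: w_i proportional to exp(-delta_i / 2),
    where delta_i = AIC_i - min(AIC)."""
    delta = np.asarray(aic, float) - min(aic)
    w = np.exp(-delta / 2.0)
    return w / w.sum()

def model_averaged_ed(ed_estimates, aic):
    """Weighted average of per-model effective-dose estimates."""
    return float(np.dot(akaike_weights(aic), ed_estimates))
```

For two candidate models with hypothetical AICs 100 and 102, the weights are roughly (0.731, 0.269), so the averaged ED is pulled toward the better-fitting model without discarding the other.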
173.
There is currently much discussion of lasso-type regularized regression, a useful tool for simultaneous estimation and variable selection. Although lasso-type regularization has several advantages in regression modelling owing to its sparsity, it is sensitive to outliers because it relies on penalized least squares. To overcome this issue, we propose a robust lasso-type estimation procedure that uses a robust criterion as the loss function together with the elastic-net penalty, which combines L1- and L2-type terms. We also introduce efficient bootstrap information criteria for choosing the optimal regularization parameters and the constant used in outlier detection. Simulation studies and a real data analysis examine the efficiency of the proposed robust sparse regression modelling. We observe that our modelling strategy performs well in the presence of outliers.
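A minimal sketch of the idea, substituting the Huber loss for the paper's robust criteria and plain (sub)gradient descent for the actual fitting algorithm; all parameter values and the demo data are illustrative:

```python
import numpy as np

def huber_grad(r, delta):
    """Gradient of the Huber loss with respect to the residual:
    linear inside [-delta, delta], clipped to +/- delta outside."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_elastic_net(X, y, lam=0.1, alpha=0.5, delta=1.345,
                       lr=0.01, n_iter=5000):
    """(Sub)gradient descent on
    (1/n) * sum Huber(X b - y) + lam * (alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n = X.shape[0]
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = X @ beta - y
        g = X.T @ huber_grad(r, delta) / n
        g = g + lam * (alpha * np.sign(beta) + (1 - alpha) * beta)
        beta = beta - lr * g
    return beta

# hypothetical demo: true model y = 2*x1 + noise, with one gross outlier
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)
y[0] += 50.0                                   # outlier in the response
beta = robust_elastic_net(X, y)
```

Because the Huber gradient is bounded, the single outlier contributes only a bounded term to the fit, while the elastic-net penalty keeps the inactive coefficient near zero.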
174.
The Theil, Pietra, Éltetö and Frigyes measures of income inequality associated with the Pareto distribution are expressed in terms of the parameters defining that distribution. Inference procedures based on the generalized variable method, the large-sample method, and the Bayesian method for testing hypotheses about, and constructing confidence intervals for, these measures are discussed. The results of a Monte Carlo study are used to compare the performance of the suggested inference procedures for a population characterized by a Pareto distribution.
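For the classical Pareto distribution with shape α and scale x_m = 1, the Theil and Pietra indices have closed forms, T = 1/(α−1) − ln(α/(α−1)) and P = (α−1)^(α−1)/α^α, which a Monte Carlo sketch can check directly (this illustrates only the simulation side, not the paper's inference procedures):

```python
import numpy as np

def theil(x):
    """Theil T index: mean of (x/mu) * log(x/mu)."""
    mu = x.mean()
    return float(np.mean((x / mu) * np.log(x / mu)))

def pietra(x):
    """Pietra index: half the relative mean deviation."""
    mu = x.mean()
    return float(np.mean(np.abs(x - mu)) / (2.0 * mu))

alpha = 3.0
rng = np.random.default_rng(42)
x = rng.pareto(alpha, 200_000) + 1.0      # classical Pareto, x_m = 1

theil_exact = 1.0 / (alpha - 1.0) - np.log(alpha / (alpha - 1.0))
pietra_exact = (alpha - 1.0) ** (alpha - 1.0) / alpha ** alpha
```

With α = 3 the exact values are T ≈ 0.0945 and P = 4/27 ≈ 0.148, and the sample versions land close to them at this sample size.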
175.
This paper presents a method for using end-to-end available-bandwidth measurements to estimate the available bandwidth on individual internal links. The basic approach is to apply a power transform to the observed end-to-end measurements, model the result as a mixture of spatially correlated exponential random variables, carry out estimation by the method of moments, and then transform back to the original variables to obtain estimates and confidence intervals for the expected available bandwidth on each link. Because spatial dependence leads to certain parameter confounding, only upper bounds can be found reliably. Simulations with ns-2 show that the method can work well and that the assumptions are approximately valid in the examples.
176.
In this paper, we investigate the selection performance of a bootstrapped version of the Akaike information criterion for nonlinear self-exciting threshold autoregressive (SETAR) data-generating processes. Empirical results are obtained via Monte Carlo simulations. The quality of our method is assessed by comparison with its non-bootstrap counterpart and through a novel procedure based on artificial neural networks.
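The bootstrap idea can be illustrated on a simpler linear AR example (not a SETAR process, and with plain residual resampling rather than the paper's procedure); all settings below are illustrative:

```python
import numpy as np

def ar_aic(y, p):
    """OLS fit of an AR(p) model; returns (AIC, coefficients, residuals)."""
    N = len(y)
    Y = y[p:]
    X = np.column_stack([y[p - k : N - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    n = len(Y)
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * (p + 1)
    return aic, beta, resid

def bootstrap_order_freq(y, max_p=3, B=100, seed=0):
    """Residual bootstrap: refit the largest model, resample its centered
    residuals, regenerate the series, and re-select the order by AIC."""
    rng = np.random.default_rng(seed)
    _, beta, resid = ar_aic(y, max_p)
    resid = resid - resid.mean()
    freq = {p: 0 for p in range(1, max_p + 1)}
    for _ in range(B):
        e = rng.choice(resid, size=len(y), replace=True)
        yb = np.zeros(len(y))
        yb[:max_p] = y[:max_p]
        for t in range(max_p, len(y)):
            lags = yb[t - max_p : t][::-1]    # yb[t-1], ..., yb[t-max_p]
            yb[t] = beta @ lags + e[t]
        aics = [ar_aic(yb, p)[0] for p in range(1, max_p + 1)]
        freq[1 + int(np.argmin(aics))] += 1
    return freq

# hypothetical demo: AR(2) data, y_t = 0.5 y_{t-1} + 0.3 y_{t-2} + e_t
rng = np.random.default_rng(1)
y = np.zeros(600)
eps = rng.standard_normal(600)
for t in range(2, 600):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]
y = y[100:]                                   # discard burn-in
freq = bootstrap_order_freq(y, max_p=3, B=100)
```

The returned frequencies show how often each order wins under resampling, which is the kind of selection-stability evidence a bootstrapped criterion provides.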
177.
Interval-valued variables have become very common in data analysis. Until now, symbolic regression has mostly approached this type of data from an optimization point of view, considering neither the probabilistic aspects of the models nor the nonlinear relationships between the interval response and the interval predictors. In this article, we formulate interval-valued variables as bivariate random vectors and introduce a bivariate symbolic regression model based on generalized linear models theory, which provides much-needed flexibility in practice. Important inferential aspects are investigated. Applications to synthetic and real data illustrate the usefulness of the proposed approach.
178.
Recently, Ristić and Nadarajah [A new lifetime distribution. J Stat Comput Simul. 2014;84:135–150] introduced the Poisson generated family of distributions and investigated the properties of a special case named the exponentiated-exponential Poisson distribution. In this paper, we study general mathematical properties of the Poisson-X family in the context of the T-X family of distributions pioneered by Alzaatreh et al. [A new method for generating families of continuous distributions. Metron. 2013;71:63–79], including the quantile function, the shapes of the density and hazard rate functions, asymptotics and Shannon entropy. We obtain a useful linear representation of the family density and explicit expressions for the ordinary and incomplete moments, mean deviations and generating function. One special lifetime model, called the Poisson power-Cauchy, is defined and some of its properties are investigated. This model can have flexible hazard rate shapes such as increasing, decreasing, bathtub and upside-down bathtub. The method of maximum likelihood is used to estimate the model parameters. We illustrate the flexibility of the new distribution by means of three applications to real-life data sets.
179.
Recently Beh and Farver investigated and evaluated three non-iterative procedures for estimating the linear-by-linear parameter of an ordinal log-linear model. The study demonstrated that these non-iterative techniques provide estimates that are, for most types of contingency tables, statistically indistinguishable from estimates obtained from Newton's unidimensional algorithm. Here we show how two of these techniques are related using the Box–Cox transformation. We also show that, by using this transformation, accurate non-iterative estimates are achievable even when a contingency table contains sampling zeros.
180.
In this work, we discuss the class of bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; we therefore examine the BL-GARCH model in a general setting under some non-normal distributions. We investigate some probabilistic properties of this model and conduct a Monte Carlo experiment to evaluate the small-sample performance of maximum likelihood estimation (MLE) for various models. Finally, within-sample estimation properties are studied using S&P 500 daily returns, which exhibit both volatility clustering and leverage effects. The main results suggest that the Student-t BL-GARCH seems highly appropriate for describing the S&P 500 daily returns.
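As a small illustration of the two stylized facts, one can simulate a plain GARCH(1,1) with standardized Student-t innovations; note this omits the bilinear leverage term that distinguishes BL-GARCH, and all parameter values are illustrative:

```python
import numpy as np

def simulate_garch_t(n, omega=0.05, alpha=0.08, beta=0.9, nu=5.0, seed=0):
    """Simulate GARCH(1,1): h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
    r_t = sqrt(h_t) * z_t, with z_t standardized Student-t(nu)."""
    rng = np.random.default_rng(seed)
    # rescale t draws to unit variance: Var(t_nu) = nu / (nu - 2)
    z = rng.standard_t(nu, n) / np.sqrt(nu / (nu - 2.0))
    h = np.empty(n)
    r = np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    r[0] = np.sqrt(h[0]) * z[0]
    for t in range(1, n):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * z[t]
    return r, h

r, h = simulate_garch_t(20_000)
excess_kurtosis = np.mean((r - r.mean()) ** 4) / np.var(r) ** 2 - 3.0
abs_autocorr = np.corrcoef(np.abs(r[1:]), np.abs(r[:-1]))[0, 1]
```

The simulated returns show positive excess kurtosis (heavy tails) and positively autocorrelated absolute returns (volatility clustering), the same features the abstract highlights in the S&P 500 data.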