211.
The Densest k-Subgraph (DkS) problem asks for a k-vertex subgraph of a given graph with the maximum number of edges. The problem is strongly NP-hard, as a generalization of the well-known Clique problem, and it is also known not to admit a Polynomial Time Approximation Scheme (PTAS). In this paper we focus on special cases of the problem with respect to the class of the input graph. In particular, toward elucidating the open questions concerning the complexity of the problem for interval graphs as well as its approximability for chordal graphs, we consider graphs having special clique graphs. We present a PTAS for stars of cliques and a dynamic programming algorithm for trees of cliques. M.L. is co-financed within Op. Education by the ESF (European Social Fund) and National Resources. V.Z. is partially supported by the Special Research Grants Account of the University of Athens under Grant 70/4/5821.
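As a point of reference for the DkS objective, here is a minimal brute-force sketch (exhaustive over all k-subsets, so exponential and purely illustrative; it is not the PTAS or dynamic program of the paper, and the function name is our own):

```python
from itertools import combinations

def densest_k_subgraph(vertices, edges, k):
    """Exhaustively find a k-vertex subset maximizing the number of
    induced edges. Illustration only: cost is C(|V|, k) * C(k, 2)."""
    edge_set = {frozenset(e) for e in edges}
    best_subset, best_count = None, -1
    for subset in combinations(vertices, k):
        count = sum(1 for u, v in combinations(subset, 2)
                    if frozenset((u, v)) in edge_set)
        if count > best_count:
            best_subset, best_count = subset, count
    return best_subset, best_count

# A 4-cycle with one chord: the densest 3-subgraph is a triangle.
verts = [1, 2, 3, 4]
es = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(densest_k_subgraph(verts, es, 3))
```

On this toy instance the optimum induces 3 edges (the triangle {1, 2, 3} or {1, 3, 4}).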
212.
Health Risk Assessment of a Modern Municipal Waste Incinerator
During the modernization of the municipal waste incinerator (MWI, maximum capacity of 180,000 tons per year) of Metropolitan Grenoble (405,000 inhabitants), in France, a risk assessment was conducted, based on four tracer pollutants: two volatile organic compounds (benzene and 1,1,1-trichloroethane) and two heavy metals (nickel and cadmium, measured in particles). A Gaussian plume dispersion model, applied to maximum emissions measured at the MWI stacks, was used to estimate the distribution of these pollutants in the atmosphere throughout the metropolitan area. A random-sample telephone survey (570 subjects) gathered data on time-activity patterns according to demographic characteristics of the population. Life-long exposure was assessed as a time-weighted average of ambient air concentrations. Inhalation alone was considered because, in the Grenoble urban setting, other routes of exposure are not likely. A Monte Carlo simulation was used to describe probability distributions of exposures and risks. The median of the distribution of life-long personal exposure to MWI benzene was 3.2·10⁻⁵ µg/m³ (20th and 80th percentiles = 1.5·10⁻⁵ and 6.5·10⁻⁵ µg/m³), yielding a 2.6·10⁻¹⁰ carcinogenic risk (1.2·10⁻¹⁰–5.4·10⁻¹⁰). For nickel, the corresponding life-time exposure and cancer risk were 1.8·10⁻⁴ µg/m³ (0.9·10⁻⁴–3.6·10⁻⁴ µg/m³) and 8.6·10⁻⁸ (4.3·10⁻⁸–17.3·10⁻⁸); for cadmium they were respectively 8.3·10⁻⁶ µg/m³ (4.0·10⁻⁶–17.6·10⁻⁶) and 1.5·10⁻⁸ (7.2·10⁻⁹–3.1·10⁻⁸). Inhalation exposure to cadmium emitted by the MWI represented less than 1% of the WHO Air Quality Guideline (5 ng/m³), while there was a margin of exposure of more than 10⁹ between the NOAEL (150 ppm) and exposure estimates for trichloroethane. Neither dioxins nor mercury, a volatile metal, were measured; their omission means the attributable life-long risks may be understated. The minute (VOCs and cadmium) to moderate (nickel) exposure and risk estimates are, however, consistent with other studies of modern MWIs meeting recent emission regulations.
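The Monte Carlo propagation step described above can be sketched as follows. This is a simplified stand-in, not the study's model: we assume a lognormal exposure distribution matched to the reported benzene median, and the inhalation unit-risk value and geometric standard deviation are illustrative placeholders:

```python
import math
import random

def monte_carlo_cancer_risk(median_exposure, gsd, unit_risk, n=100_000, seed=1):
    """Propagate a lognormal exposure distribution (median, geometric SD)
    through a linear unit-risk model; return 20th/50th/80th percentiles."""
    rng = random.Random(seed)
    mu, sigma = math.log(median_exposure), math.log(gsd)
    risks = sorted(math.exp(rng.gauss(mu, sigma)) * unit_risk for _ in range(n))
    pct = lambda p: risks[int(p * n)]
    return pct(0.20), pct(0.50), pct(0.80)

# Illustrative inputs: median benzene exposure 3.2e-5 ug/m3 (from the text)
# with an assumed GSD of 2.0 and a hypothetical unit risk of 7.8e-6 per ug/m3.
p20, p50, p80 = monte_carlo_cancer_risk(3.2e-5, gsd=2.0, unit_risk=7.8e-6)
print(p20, p50, p80)  # median risk on the order of 2.5e-10
```

With these placeholder inputs, the simulated median risk lands in the same 10⁻¹⁰ range the study reports for benzene.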
213.
In this paper we study a few important tree optimization problems with applications to computational biology. These problems ask for trees that are consistent with as large a part of the given data as possible. We show that the maximum homeomorphic agreement subtree problem cannot be approximated within a factor of N^ε, where N is the input size, for any 0 ≤ ε < 1/9 in polynomial time unless P = NP, even if all the given trees are of height 2. On the other hand, we present an O(N log N)-time heuristic for the restriction of this problem to instances with O(1) trees of height O(1), yielding solutions within a constant factor of the optimum. We prove that the maximum inferred consensus tree problem is NP-complete, and provide a simple, fast heuristic for it yielding solutions within one third of the optimum. We also present a more specialized polynomial-time heuristic for the maximum inferred local consensus tree problem.
214.
In some situations, an appropriate quality measure uses three or more discrete levels to classify a product characteristic. For these situations, some control charts have been developed based on statistical criteria alone, without economic considerations. In this paper, we develop economic and economic statistical designs (ESD) for three-level control charts. We apply the cost model proposed by Costa and Rahim [Economic design of X̄ charts with variable parameters: the Markov chain approach, J. Appl. Stat. 28 (2001), 875–885]. Furthermore, we assume that the length of time the process remains in control is exponentially distributed, which allows us to apply the Markov chain approach in developing the cost model. We apply a genetic algorithm to determine the optimal values of the model parameters by minimizing the cost function. A numerical example is provided to illustrate the performance of the proposed models and to compare the costs of the pure economic and economic statistical designs for three-level control charts. A sensitivity analysis is also conducted on this numerical example.
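The genetic-algorithm step can be illustrated with a toy optimizer over chart parameters (sample size n, sampling interval h, limit width L). The cost surrogate below is entirely made up for illustration; it is NOT the Costa-Rahim Markov-chain cost model the paper uses:

```python
import math
import random

def chart_cost(n, h, L):
    """Hypothetical hourly-cost surrogate (assumed, not the paper's model):
    sampling cost grows with n/h, false-alarm cost falls in L,
    detection-delay cost grows with h and L."""
    return 0.5 * n / h + 50.0 * math.exp(-L) + 2.0 * h * L

def genetic_search(pop_size=30, generations=60, seed=7):
    """Minimal GA: truncation selection, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    def rand_ind():
        return (rng.randint(2, 20), rng.uniform(0.5, 8.0), rng.uniform(1.0, 4.0))
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: chart_cost(*ind))
        survivors = pop[: pop_size // 2]          # keep the cheapest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            n, h, L = (( x + y) / 2 for x, y in zip(a, b))   # crossover
            children.append((max(2, round(n + rng.gauss(0, 1))),   # mutate
                             max(0.5, h + rng.gauss(0, 0.3)),
                             max(1.0, L + rng.gauss(0, 0.2))))
        pop = survivors + children
    return min(pop, key=lambda ind: chart_cost(*ind))

best = genetic_search()
print(best, chart_cost(*best))
```

In the paper this search would be run over the true expected-cost function derived from the Markov chain, possibly subject to statistical constraints for the ESD variant.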
215.
216.
217.
Formulating the model first in continuous time, we develop a state-space approach to testing for threshold-type nonlinearity when the data are irregularly spaced.
218.
This paper considers likelihood ratio (LR) tests of stationarity, common trends and cointegration for multivariate time series. As the distribution of these tests is not known, a bootstrap version is proposed via a state-space representation. The bootstrap samples are obtained from the Kalman filter innovations under the null hypothesis. Monte Carlo simulations for the Gaussian univariate random walk plus noise model show that the bootstrap LR test achieves higher power for medium-sized deviations from the null hypothesis than a locally optimal and one-sided Lagrange multiplier (LM) test that has a known asymptotic distribution. The power gains of the bootstrap LR test are significantly larger for testing the hypotheses of common trends and cointegration in multivariate time series, as the alternative asymptotic procedure – obtained as an extension of the LM test of stationarity – does not possess optimality properties. Finally, it is shown that the (pseudo-)LR tests maintain good size and power properties for non-Gaussian series as well. An empirical illustration is provided.
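A bootstrap LR test of this kind can be sketched for the univariate random walk plus noise (local level) model. This is a crude simplification of the paper's procedure, with assumed initialization, the noise variance fixed at the sample variance, and the signal variance profiled over a coarse grid rather than properly maximized:

```python
import numpy as np

def kalman_loglik(y, sig_eps2, sig_eta2):
    """Gaussian log-likelihood of the local level model
    y_t = mu_t + eps_t,  mu_{t+1} = mu_t + eta_t,
    via the Kalman filter prediction-error decomposition."""
    a, p = y[0], 1e7 * sig_eps2          # rough diffuse initialization (assumed)
    ll = 0.0
    for t in range(1, len(y)):
        p = p + sig_eta2                 # predict state variance
        v = y[t] - a                     # one-step-ahead innovation
        f = p + sig_eps2                 # innovation variance
        ll -= 0.5 * (np.log(2.0 * np.pi * f) + v * v / f)
        k = p / f                        # Kalman gain
        a, p = a + k * v, p * (1.0 - k)  # filter update
    return ll

def bootstrap_lr_stationarity(y, n_boot=99, seed=0):
    """Parametric-bootstrap LR test of H0: sig_eta2 = 0 (no random-walk
    component). Sketch only: bootstrap samples are drawn as i.i.d. Gaussian
    noise under H0, not resampled filter innovations as in the paper."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 2.0 * np.var(y), 21)

    def max_ll(series, etas):
        s2 = max(np.var(series), 1e-8)   # crude plug-in noise variance
        return max(kalman_loglik(series, s2, e) for e in etas)

    lr_obs = 2.0 * (max_ll(y, grid) - max_ll(y, [0.0]))
    null_lrs = [2.0 * (max_ll(yb, grid) - max_ll(yb, [0.0]))
                for yb in (rng.normal(np.mean(y), np.std(y), len(y))
                           for _ in range(n_boot))]
    p_value = (1 + sum(l >= lr_obs for l in null_lrs)) / (1 + n_boot)
    return lr_obs, p_value
```

Since the grid includes zero, the LR statistic is non-negative by construction; a small bootstrap p-value is evidence against stationarity of the level.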
219.
This article deals with the construction of an X̄ control chart from the Bayesian perspective. We obtain new control limits for the X̄ chart for exponentially distributed data-generating processes through the sequential use of Bayes' theorem and credible intervals. Construction of the control chart is illustrated using a simulated data example. The performance of the proposed limits is examined and compared with that of the standard, tolerance interval, exponential cumulative sum (CUSUM) and exponential exponentially weighted moving average (EWMA) control limits via a Monte Carlo simulation study. The proposed Bayesian control limits are found to perform better than the standard, tolerance interval, exponential EWMA and exponential CUSUM control limits for exponentially distributed processes.
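The Bayesian mechanics can be sketched using the Gamma-exponential conjugacy: with a Gamma(a0, b0) prior on the rate, the posterior after observing the data is Gamma(a0 + n, b0 + sum(x)), and control limits can be read off a credible interval for a future subgroup mean. The prior hyperparameters, subgroup size, and function name below are our own illustrative choices, not the article's:

```python
import random

def bayes_xbar_limits(data, a0=1.0, b0=1.0, m=5, cred=0.99,
                      n_sim=20_000, seed=3):
    """Sketch of Bayesian X-bar limits for exponential data: simulate the
    posterior-predictive distribution of the mean of a future subgroup of
    size m and return the central credible interval as (LCL, UCL)."""
    rng = random.Random(seed)
    a_post, b_post = a0 + len(data), b0 + sum(data)   # conjugate update
    sims = []
    for _ in range(n_sim):
        lam = rng.gammavariate(a_post, 1.0 / b_post)  # draw a posterior rate
        sims.append(sum(rng.expovariate(lam) for _ in range(m)) / m)
    sims.sort()
    lo = sims[int((1.0 - cred) / 2.0 * n_sim)]
    hi = sims[int((1.0 + cred) / 2.0 * n_sim)]
    return lo, hi
```

Sequential use, as in the article's framing, would feed each new in-control subgroup back into the posterior before recomputing the limits.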
220.
The autoregressive model is a popular method for analysing time-dependent data, and selection of the order parameter is imperative. Two commonly used selection criteria are the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), which are known to be prone to overfitting and underfitting, respectively. To our knowledge, no criterion in the literature performs satisfactorily across all situations. In this paper, we therefore focus on forecasting the future values of an observed time series and propose an adaptive idea that combines the advantages of AIC and BIC while mitigating their weaknesses, based on the concept of generalized degrees of freedom. Instead of applying a fixed criterion to select the order parameter, we propose an approximately unbiased estimator of the mean squared prediction error, based on a data perturbation technique, for fairly comparing AIC and BIC; the selected criterion is then used to determine the final order parameter. Numerical experiments show the superiority of the proposed method, and a real data set of the retail price index of China from 1952 to 2008 is also analysed for illustration.
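The baseline step, selecting an AR order by AIC or BIC, can be sketched with conditional least squares. This is a generic illustration of the two criteria the paper starts from, not its adaptive perturbation-based procedure, and the exact criterion formulas here are one common convention among several:

```python
import numpy as np

def ar_order_by_criterion(y, max_p=8, criterion="aic"):
    """Fit AR(p) by OLS for p = 1..max_p and return the order minimizing
    the chosen criterion (conditional least squares, constant included)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_p, best_score = None, np.inf
    for p in range(1, max_p + 1):
        # Lag matrix: column k holds y_{t-1-k} for t = p..n-1.
        X = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
        X = np.column_stack([np.ones(n - p), X])
        target = y[p:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        sigma2 = np.mean((target - X @ beta) ** 2)
        k_params = p + 1                      # AR coefficients + intercept
        penalty = (2 * k_params if criterion == "aic"
                   else k_params * np.log(n - p))
        score = (n - p) * np.log(sigma2) + penalty
        if score < best_score:
            best_p, best_score = p, score
    return best_p
```

BIC's log(n) penalty grows faster than AIC's constant 2, which is exactly the overfit/underfit tension the paper's adaptive criterion is designed to arbitrate.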