Results: 5129 total (paid full text: 5038; free: 74; domestic free: 17). Search time: 15 ms.
By subject: Management 227; Ethnology 5; Demography 79; Collected works 50; Theory and methodology 25; Comprehensive 574; Sociology 55; Statistics 4114.
By year: 2024: 12; 2023: 33; 2022: 45; 2021: 33; 2020: 99; 2019: 167; 2018: 193; 2017: 316; 2016: 153; 2015: 93; 2014: 146; 2013: 1355; 2012: 650; 2011: 124; 2010: 141; 2009: 163; 2008: 159; 2007: 111; 2006: 109; 2005: 109; 2004: 89; 2003: 75; 2002: 85; 2001: 83; 2000: 74; 1999: 74; 1998: 68; 1997: 50; 1996: 30; 1995: 46; 1994: 39; 1993: 24; 1992: 30; 1991: 14; 1990: 15; 1989: 12; 1988: 18; 1987: 11; 1986: 9; 1985: 6; 1984: 13; 1983: 16; 1982: 9; 1981: 7; 1980: 2; 1979: 6; 1978: 5; 1977: 3; 1975: 2; 1973: 1.
941.
Total citations: 1 (self-citations: 0, citations by others: 1)
Flood events can be caused by several different meteorological circumstances. For example, heavy rain often leads to short flood events with high peaks, whereas snowmelt typically produces events of very long duration and high volume. Both event types must be considered in the design of flood protection systems. Unfortunately, these different event types are often mixed together in annual maximum series (AMS), leading to inhomogeneous samples, and certain event types are underrepresented in the AMS. This is especially unsatisfactory when the most extreme events stem from such an underrepresented type. Therefore, monthly maxima are used to broaden the information available on the different event types. Of course, not every monthly maximum can be declared a flood, so not all of them can enter the flood statistics. To account for this, a mixture peaks-over-threshold model is applied, with thresholds specifying the flood events of the several types that occur in each season of the year. The model is then extended to cover the seasonal structure of the data. Its applicability is demonstrated in a German case study, where the impact of the individual event types in different parts of the year is evaluated.
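The season-specific peaks-over-threshold idea can be sketched as follows. This is a minimal illustration, not the study's model: the monthly-maximum data, the Gumbel parent distributions, and the thresholds (65 and 110) are all invented for the example.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# Hypothetical monthly maximum discharges for two seasons with different
# flood-generating mechanisms (short rain floods vs. long snowmelt floods)
summer = rng.gumbel(loc=50, scale=10, size=240)
winter = rng.gumbel(loc=80, scale=25, size=240)

def fit_pot(monthly_maxima, threshold):
    """Fit a generalized Pareto tail to exceedances over `threshold`.

    Only monthly maxima above the threshold count as flood events, so the
    threshold plays the 'is this really a flood?' role described in the text."""
    exceedances = monthly_maxima[monthly_maxima > threshold] - threshold
    shape, _, scale = genpareto.fit(exceedances, floc=0.0)
    return shape, scale, len(exceedances)

# Season-specific thresholds keep the event types in separate sub-models
for name, data, u in [("summer", summer, 65.0), ("winter", winter, 110.0)]:
    shape, scale, n = fit_pot(data, u)
    print(f"{name}: {n} exceedances, GPD shape={shape:.2f}, scale={scale:.2f}")
```

A full mixture model would then combine the seasonal sub-models into one annual exceedance distribution.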
942.

Estimation in the multivariate context when the number of observations is smaller than the number of variables is a classical theoretical problem. To ensure estimability, certain constraints must be imposed on the parameters. A method for maximum likelihood estimation under such constraints is proposed. Even in the extreme case where only a single multivariate observation is available, it may still yield a feasible solution. The method also provides a simple, straightforward way to impose specific structures within and between the covariance matrices of several populations, and it yields exact maximum likelihood estimates.
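A toy version of the single-observation situation, not the paper's actual method: with one p-variate observation the unconstrained covariance MLE is singular, but a simple constraint restores estimability in closed form. The spherical constraint and all parameter values below are chosen purely for illustration.

```python
import numpy as np

# One p-variate observation x ~ N(0, sigma^2 I). The unconstrained covariance
# MLE x x^T is rank-one and singular, but constraining Sigma = sigma^2 * I
# restores estimability: the constrained MLE is sigma^2 = ||x||^2 / p.
rng = np.random.default_rng(11)
p, sigma2_true = 200, 2.0
x = rng.normal(scale=np.sqrt(sigma2_true), size=p)

sigma2_hat = (x @ x) / p   # closed-form MLE under the constraint
print(sigma2_hat)          # close to 2.0 for large p
```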
943.

In reliability theory, a widely used model for the cumulative deterioration of a system over time is the standard gamma process (SGP). Because of several restrictions, such as a constant variance-to-mean ratio, this process is not always a suitable choice for describing deterioration. A way to overcome these restrictions is the extended gamma process introduced by Cinlar (1980), which is characterized by shape and scale functions. In this article, we propose statistical methods to estimate the unknown parameters of parametric forms of the shape and scale functions. We develop two generalized methods of moments (Hansen, 1982), based either on the moments or on the Laplace transform of an extended gamma process. Asymptotic properties are provided, and a Wald-type test is derived that allows testing SGPs against extended ones with a specific parametric shape function. The performance of the proposed estimation methods is illustrated on simulated and real data.
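The SGP restriction mentioned above is easy to see by simulation. The sketch below (parameters a and b are arbitrary) checks that the variance-to-mean ratio of a standard gamma process stays constant over time, which is exactly what the extended process relaxes by replacing a and b with functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sgp(times, a, b, n_paths=20000):
    """Standard gamma process via independent increments:
    X(t) - X(s) ~ Gamma(shape=a*(t - s), scale=b)."""
    dt = np.diff(np.concatenate(([0.0], times)))
    incr = rng.gamma(shape=a * dt, scale=b, size=(n_paths, len(dt)))
    return incr.cumsum(axis=1)

times = np.array([1.0, 2.0, 5.0])
paths = simulate_sgp(times, a=2.0, b=0.5)

# For an SGP, Var[X(t)] / E[X(t)] = b at *every* t -- the constant
# variance-to-mean ratio that the extended gamma process removes by
# letting the shape and scale vary with time.
print(paths.var(axis=0) / paths.mean(axis=0))  # roughly [0.5, 0.5, 0.5]
```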
944.

Many manufacturing and service systems today deliver products and services through several consecutive stages of operations, in each of which one or more quality characteristics of interest are monitored. In these environments, the final quality at the last stage depends not only on the quality of the work performed in that stage but also on the quality of the products and services at intermediate stages, as well as on the design parameters of each stage. In this paper, a novel methodology based on the posterior preference approach is proposed to robustly optimize such multistage processes. A multi-response surface optimization problem is solved to find preferred solutions among the nondominated solutions (NDSs) according to the decision maker's preferences. In addition, because intermediate response variables (quality characteristics) may act as covariates in subsequent stages, a robust multi-response estimation method is applied to extract the relationships between the outputs and inputs of each stage. The NDSs are generated by the ε-constraint method, and robust preferred solutions are selected using newly defined conformance criteria. The applicability of the proposed approach is illustrated with a numerical example.
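The ε-constraint method used to generate the NDSs can be sketched in one dimension. The two quadratic objectives below are made up (a stand-in for target deviation and variability), not the paper's multistage model; sweeping ε traces out nondominated trade-offs between them.

```python
from scipy.optimize import minimize

# Two hypothetical quality objectives over a single design variable x in [0, 2]
f1 = lambda x: (x[0] - 1.0) ** 2   # e.g. deviation of the mean response from target
f2 = lambda x: (x[0] - 2.0) ** 2   # e.g. response variability

def eps_constraint(eps):
    """Epsilon-constraint scalarization: minimize f1 subject to f2 <= eps."""
    res = minimize(
        f1, x0=[0.5], bounds=[(0.0, 2.0)],
        constraints=[{"type": "ineq", "fun": lambda x: eps - f2(x)}],
    )
    return float(res.x[0])

# Tightening eps moves the solution from the f1-optimum toward the f2-optimum
print([round(eps_constraint(e), 2) for e in (1.0, 0.25, 0.04)])
```

With these objectives the solutions sit on the constraint boundary x = 2 - sqrt(ε), so the sweep yields values close to 1.0, 1.5, and 1.8.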
945.

This article focuses on the conditional density of a scalar response variable given a random variable taking values in a semimetric space. Local linear estimators of the conditional density and its derivative are considered, assuming the observations form a stationary α-mixing sequence. Under some regularity conditions, the joint asymptotic normality of the estimators of the conditional density and its derivative is established. The result confirms the conjecture in Rachdi et al. (2014) and can be applied in time-series analysis to make predictions and build confidence intervals. The finite-sample behavior of the estimator is also investigated by simulation.
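For intuition, a double-kernel conditional density estimator can be sketched in the scalar-covariate case. This is the simpler Nadaraya-Watson form, not the local linear estimator the article studies, and the data, bandwidths, and evaluation point are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def cond_density(x_obs, y_obs, x0, y_grid, h_x, h_y):
    """Double-kernel estimate of f(y | x = x0): weight each observation's
    smoothed y-contribution by how close its x is to x0."""
    gauss = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    w_x = gauss((x0 - x_obs) / h_x)
    k_y = gauss((y_grid[:, None] - y_obs[None, :]) / h_y) / h_y
    return (k_y * w_x).sum(axis=1) / w_x.sum()

n = 2000
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # true f(y|x) is N(2x, 0.25)

grid = np.linspace(-3.0, 3.0, 61)
fhat = cond_density(x, y, x0=0.5, y_grid=grid, h_x=0.1, h_y=0.2)
print(grid[np.argmax(fhat)])  # near the true conditional mode 2 * 0.5 = 1.0
```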
946.

We propose model-free measures of Granger causality in mean between random variables. Unlike existing measures, ours can detect and quantify nonlinear causal effects. The new measures are based on nonparametric regressions and are defined as logarithmic functions of restricted and unrestricted mean square forecast errors. They are easily and consistently estimated by replacing the unknown mean square forecast errors with their nonparametric kernel estimates. We derive the asymptotic normality of the nonparametric estimator of the causality measures, which we use to build tests of their statistical significance, and we establish the validity of a smoothed local bootstrap for statistical testing in finite samples. Monte Carlo simulations show that the proposed test has good finite-sample size and power properties across a variety of data-generating processes and sample sizes. Finally, the empirical importance of measuring nonlinear causality in mean is illustrated: we quantify the degree of nonlinear predictability of the equity risk premium using the variance risk premium. Our empirical results show that the variance risk premium is a very good predictor of the risk premium at horizons under 6 months, and that the high degree of predictability at the 1-month horizon can be attributed to a nonlinear causal effect. Supplementary materials for this article are available online.
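The log-ratio construction can be sketched on simulated data with a purely nonlinear (squared) causal link, which a linear test would miss. This is a simplification of the paper's measures: in-sample kernel residuals stand in for out-of-sample forecast errors, and the bandwidth, coefficients, and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def nw(X, y, h):
    """In-sample Nadaraya-Watson fit with a Gaussian product kernel."""
    d = ((X[:, None, :] - X[None, :, :]) / h) ** 2
    w = np.exp(-0.5 * d.sum(axis=2))
    return (w @ y) / w.sum(axis=1)

# y is caused by x only through x^2, so the effect is invisible to linear tests
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] ** 2 + 0.2 * rng.normal()

Xr = y[:-1, None]                        # restricted model: own past only
Xu = np.column_stack([y[:-1], x[:-1]])   # unrestricted model: adds past of x
target = y[1:]
mse_r = np.mean((target - nw(Xr, target, h=0.5)) ** 2)
mse_u = np.mean((target - nw(Xu, target, h=0.5)) ** 2)

# Positive log-ratio of mean squared errors indicates x Granger-causes y in mean
measure = np.log(mse_r / mse_u)
print(measure)
```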
947.

Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation of a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. Frequentist methods are used for planning and analyzing the trial, while the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate the external information into the sample size re-estimation, we propose updating the meta-analytic-predictive prior with the results of the internal pilot study and re-estimating the sample size using an estimator from the posterior. By means of a simulation study, we compare operating characteristics such as power and the sample size distribution of the proposed procedure with those of the traditional sample size re-estimation approach based on the pooled variance estimator. The simulation study shows that, when no prior-data conflict is present, incorporating external information improves the operating characteristics relative to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing trial differs from the prior location, the traditional procedure is generally superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should therefore be balanced against the risks.
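The mechanics of variance-based sample size re-estimation can be sketched as follows. The normal-approximation sample size formula is standard; the crude fixed-weight blend of the prior and pilot variances is a hypothetical stand-in for the meta-analytic-predictive posterior update, and all numbers (assumed variance, effect size, weight) are invented.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(sigma2, delta, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for a two-sample comparison:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(2 * sigma2 * (za + zb) ** 2 / delta ** 2))

# Planning stage: variance assumed from historical (prior) data
n_planned = n_per_arm(sigma2=4.0, delta=1.0)

# Internal pilot: the observed variance differs from the planning assumption
rng = np.random.default_rng(7)
pilot = rng.normal(0.0, np.sqrt(6.25), size=40)
s2_pilot = pilot.var(ddof=1)

# Hypothetical fixed-weight blend of prior and pilot variance estimates --
# a simplified stand-in for updating the meta-analytic-predictive prior
w = 0.5
s2_blend = w * 4.0 + (1 - w) * s2_pilot
n_reestimated = n_per_arm(s2_blend, delta=1.0)
print(n_planned, n_reestimated)
```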
948.
    
Standard generalized linear models (GLMs) consist of three components: a random component, the distribution of the response variable, which belongs to the exponential family; a systematic component, the linear predictor; and a known link function specifying the relationship between the linear predictor and the mean of the distribution. A flexible extension of standard GLMs allows an unknown link function. The classical parametric likelihood approach is not applicable because of the large parameter space. To address this issue, sieve maximum likelihood estimation has been developed in the literature, in which the estimator of the unknown link function is assumed to lie in a sieve space. Various sieve methods, including B-spline and P-spline based approaches, are introduced, and their numerical implementation and theoretical properties are discussed. WIREs Comput Stat 2018, 10:e1425. doi: 10.1002/wics.1425. This article is categorized under:
  • Applications of Computational Statistics > Signal and Image Processing and Coding
  • Statistical and Graphical Methods of Data Analysis > Nonparametric Methods
  • Statistical Models > Generalized Linear Models
  • Algorithms and Computational Methods > Maximum Likelihood Methods
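The B-spline sieve step is easy to sketch for the Gaussian family: with the index coefficients held fixed, maximizing the likelihood over the sieve space reduces to least squares on a B-spline basis. Everything below (the tanh "unknown link", the coefficients, the knot choice) is invented for illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
n, k = 400, 3
X = rng.normal(size=(n, 2))
beta = np.array([0.8, 0.6])                   # index coefficients, held fixed here
eta = X @ beta                                # linear predictor
y = np.tanh(eta) + 0.1 * rng.normal(size=n)   # unknown link g = tanh, plus noise

# Sieve space: cubic B-splines on a clamped knot vector over the range of eta
interior = np.quantile(eta, np.linspace(0.1, 0.9, 7))
t = np.r_[[eta.min()] * (k + 1), interior, [eta.max()] * (k + 1)]
nb = len(t) - k - 1                           # number of basis functions

# Build the design matrix by evaluating each basis spline at eta
design = np.empty((n, nb))
for j in range(nb):
    c = np.zeros(nb)
    c[j] = 1.0
    design[:, j] = BSpline(t, c, k)(eta)

# Gaussian-family sieve MLE for the link = least squares in the basis
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
g_hat = design @ coef
print(np.sqrt(np.mean((g_hat - np.tanh(eta)) ** 2)))  # small approximation error
```

A full sieve MLE would alternate this step with updates of beta; here only the link-estimation step is shown.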
949.
    
Stat, 2018, 7(1)
The weighted kernel density estimator is an attractive option for shape-restricted density estimation because it is simple, familiar, and potentially applicable to many different shape constraints. Despite this, no reliable software implementation has appeared since the method was proposed in 2002. We found that serious numerical and practical difficulties arise when attempting to implement the method. We overcame these difficulties and, in the process, discovered that the weighted method and our own recently proposed method, which controls the shape of a kernel density using an adjustment curve, can be unified in a single computational framework. This article describes our findings and introduces the R package scdensity, which can be used to easily obtain density estimates that are unimodal, bimodal, symmetric, and more. © 2018 The Authors. Stat published by John Wiley & Sons Ltd.
950.
    
Level set trees provide a tool for analyzing multivariate functions. They are particularly efficient for visualizing and presenting properties related to local maxima and minima, can support statistical inference when estimating probability density functions and regression functions, and can be used in cluster analysis and function optimization, among other applications. Level set trees offer a new way to look at multivariate functions, making the detection and analysis of multivariate phenomena feasible beyond one- and two-dimensional analysis. This article is categorized under:
  • Statistical and Graphical Methods of Data Analysis > Analysis of High Dimensional Data
  • Statistical and Graphical Methods of Data Analysis > Statistical Graphics and Visualization
  • Statistical and Graphical Methods of Data Analysis > Nonparametric Methods
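The core level-set idea can be sketched in one dimension: count the connected components of the superlevel set {f > λ} at each level λ; the nesting of these components as λ decreases is exactly the level set tree. The bimodal test function below is made up for the example.

```python
import numpy as np

def level_set_components(values, levels):
    """For each level, count connected runs of grid points where values > level.
    Tracking how these components merge as the level drops yields the tree."""
    out = {}
    for lam in levels:
        above = values > lam
        # a new component starts wherever `above` switches from False to True
        starts = np.flatnonzero(above & ~np.r_[False, above[:-1]])
        out[lam] = len(starts)
    return out

x = np.linspace(-4.0, 6.0, 1000)
# Bimodal function: two local maxima of different heights
f = np.exp(-0.5 * x ** 2) + 0.8 * np.exp(-0.5 * (x - 3.5) ** 2)
print(level_set_components(f, levels=[0.1, 0.5, 0.9]))
```

This prints {0.1: 1, 0.5: 2, 0.9: 1}: one component low in the tree, two branches at mid level where the modes separate, and only the taller mode surviving at 0.9.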

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号