1.
Flight experience that ordinary people never have forged and reinforced the poet Ning Ming's keen perceptiveness, restrained rationality, and open-minded lucidity. These qualities form the spiritual undertone of both the man and his work, and give his poetry a distinctive aesthetic significance.
2.
Merging information for semiparametric density estimation
Summary.  The density ratio model specifies that the likelihood ratio of each of m − 1 probability density functions with respect to the m-th is of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. Combining the data from all the samples leads to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.
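To fix ideas, here is a minimal sketch of the pooling step: a kernel density estimate built from the combined data of several samples. The function name, Gaussian kernel, and Silverman bandwidth rule are illustrative assumptions; the paper's estimator additionally exploits the estimated density-ratio weights, which this sketch omits.

```python
import numpy as np

def pooled_kde(samples, grid, bandwidth=None):
    """Kernel density estimate from the pooled data of several samples.

    A crude stand-in for the semiparametric estimator: it simply pools
    observations and ignores the density-ratio weighting.
    """
    pooled = np.concatenate(samples)
    n = pooled.size
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel
        bandwidth = 1.06 * pooled.std(ddof=1) * n ** (-1 / 5)
    # Gaussian kernel evaluated at each grid point against each observation
    u = (grid[:, None] - pooled[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

# Example: two samples assumed to share the density-ratio structure
rng = np.random.default_rng(0)
samples = [rng.normal(0, 1, 200), rng.normal(0.5, 1.2, 300)]
grid = np.linspace(-4, 5, 200)
density = pooled_kde(samples, grid)
```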
3.
Summary.  We develop a general methodology for tilting time series data. Attention is focused on a large class of regression problems, where errors are expressed through autoregressive processes. The class has a range of important applications and, in the context of our work, may be used to illustrate the application of tilting methods to interval estimation in regression, robust statistical inference, and estimation subject to constraints. The method can be viewed as 'empirical likelihood with nuisance parameters'.
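To make the tilting idea concrete, here is a hedged sketch of the simplest empirical likelihood computation: reweighting ("tilting") an i.i.d. sample so that its weighted mean equals a candidate value mu. The Newton solver and names are illustrative; the paper's time series version, with autoregressive errors and nuisance parameters, is considerably more involved.

```python
import numpy as np

def el_tilt_weights(x, mu, tol=1e-10, max_iter=50):
    """Empirical likelihood weights tilting the sample mean of x to mu.

    Solves sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0 for lam by
    Newton's method, then returns p_i = 1 / (n * (1 + lam * (x_i - mu))).
    """
    z = x - mu
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)           # estimating equation
        dg = -np.sum(z**2 / denom**2)   # its derivative in lam
        step = g / dg
        lam -= step
        if abs(step) < tol:
            break
    return 1.0 / (x.size * (1.0 + lam * z))

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, 100)
p = el_tilt_weights(x, mu=0.0)   # tilt toward mean zero
# the weighted mean is (numerically) zero and the weights still sum to one
assert abs(np.sum(p * x)) < 1e-6 and abs(p.sum() - 1) < 1e-6
```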
4.
Summary.  Wavelet shrinkage is an effective nonparametric regression technique, especially when the underlying curve has irregular features such as spikes or discontinuities. The basic idea is simple: take the discrete wavelet transform of data consisting of a signal corrupted by noise; shrink or remove the wavelet coefficients to remove the noise; then invert the discrete wavelet transform to form an estimate of the true underlying curve. Various researchers have proposed increasingly sophisticated methods of doing this by using real-valued wavelets. Complex-valued wavelets exist but are rarely used. We propose two new complex-valued wavelet shrinkage techniques: one based on multiwavelet-style shrinkage and the other using Bayesian methods. Extensive simulations show that our methods almost always give significantly more accurate estimates than methods based on real-valued wavelets. Further, our multiwavelet-style shrinkage method is both simpler and dramatically faster than its competitors. To understand the excellent performance of this method we present a new risk bound on its hard thresholded coefficients.
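The basic real-valued pipeline the abstract describes can be sketched in a few lines with the PyWavelets package; the Daubechies wavelet and the universal threshold are conventional defaults here, not the authors' complex-valued or Bayesian procedures.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_shrink(y, wavelet="db4", mode="hard"):
    """Denoise a 1-D signal: DWT -> threshold detail coefficients -> inverse DWT."""
    coeffs = pywt.wavedec(y, wavelet)
    # Estimate the noise level from the finest-scale details (MAD / 0.6745)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(y)))  # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(y)]

# Example: a blocky signal (spikes/discontinuities) corrupted by noise
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 512)
signal = np.where(t < 0.4, 0.0, 1.0) + np.where(t > 0.7, -0.8, 0.0)
estimate = wavelet_shrink(signal + rng.normal(0, 0.2, t.size))
```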
5.
Summary.  Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are indicative of departure. It is based on fitting a mixture model to the set of test statistics by the method of moments and then deriving an estimated likelihood ratio. Simulation suggests that the procedure has good properties when the departure from an overall null hypothesis is not too small.
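A hedged sketch of the rule's ingredients for z-statistics follows: fit, by matching the first two sample moments, a two-component mixture whose null component is standard normal and whose alternative is N(mu, 1) with proportion pi, then flag statistics whose estimated likelihood ratio exceeds a cutoff. The unit alternative variance and the cutoff value are simplifying assumptions, not the paper's specification.

```python
import numpy as np
from scipy.stats import norm

def flag_by_likelihood_ratio(z, cutoff=1.0):
    """Moment-fit a two-component normal mixture and flag large-LR statistics.

    Model: z ~ (1 - pi) * N(0, 1) + pi * N(mu, 1).  Matching the first two
    sample moments gives mu = (m2 - 1) / m1 and pi = m1**2 / (m2 - 1).
    """
    m1, m2 = z.mean(), (z**2).mean()
    mu = (m2 - 1.0) / m1
    pi = min(max(m1**2 / (m2 - 1.0), 1e-6), 1 - 1e-6)  # clip to (0, 1)
    lr = pi * norm.pdf(z, loc=mu) / ((1.0 - pi) * norm.pdf(z))
    return lr > cutoff, (pi, mu)

# Example: 5% of 2000 statistics drawn from the alternative N(3, 1)
rng = np.random.default_rng(3)
z = np.where(rng.uniform(size=2000) < 0.05,
             rng.normal(3, 1, 2000), rng.normal(0, 1, 2000))
flags, (pi_hat, mu_hat) = flag_by_likelihood_ratio(z)
```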
6.
Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early post-storm estimates converging on a damage estimate in the 28 to 31 billion dollar range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To inform such future analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
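For orientation, here is a sketch of the Holland (1980) gradient wind profile that underlies a "Holland B parameter" wind field. The symbols (central pressure deficit dp, radius of maximum winds r_max, shape parameter B, air density rho, Coriolis parameter f) follow the standard formulation, but this is an illustrative profile under assumed parameter values, not the audited proprietary model.

```python
import numpy as np

def holland_gradient_wind(r, dp, r_max, B, rho=1.15, f=5e-5):
    """Holland (1980) gradient wind speed (m/s) at radius r (m) from the center.

    dp    : central pressure deficit in Pa
    r_max : radius of maximum winds in m
    B     : Holland shape parameter (typically ~1 to 2.5)
    rho   : air density in kg/m^3
    f     : Coriolis parameter in 1/s
    """
    a = (r_max / r) ** B
    return np.sqrt(B * dp / rho * a * np.exp(-a) + (r * f / 2.0) ** 2) - r * f / 2.0

# Example: 50 hPa pressure deficit, 30 km radius of maximum winds, B = 1.4
r = np.linspace(5e3, 200e3, 100)
v = holland_gradient_wind(r, dp=5000.0, r_max=30e3, B=1.4)
```

Sensitivity analysis of the kind described would then vary dp, r_max, and B over their plausible ranges and attribute the spread in wind speed (and downstream loss) to each input.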
7.
In early childhood education, the gender imbalance among teachers remains an undeniable fact. Concern over this imbalance has prompted many calls to raise the proportion of male teachers in the field. However, discussion of the effects of a higher or lower proportion of male teachers in early childhood education has long been limited by a shortage of empirical evidence. Triangulating multiple methods, including observation, interviews, and artifact study, can help yield new data and fresh thinking on this question.
8.
Gini’s nuclear family
The purpose of this paper is to justify the use of the Gini coefficient and two close relatives for summarizing the basic information of inequality in distributions of income. To this end we employ a specific transformation of the Lorenz curve, the scaled conditional mean curve, rather than the Lorenz curve itself, as the basic formal representation of inequality in distributions of income. The scaled conditional mean curve is shown to possess several attractive properties as an alternative interpretation of the information content of the Lorenz curve, and it furthermore proves to yield essential information on polarization in the population. The paper also provides asymptotic distribution results for the empirical scaled conditional mean curve and the related family of empirical measures of inequality.
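A hedged numerical sketch of the objects being summarized: the sample Gini coefficient via the standard sorted-values formula G = (2 * sum_i i * x_(i)) / (n * sum x) - (n + 1) / n, together with Lorenz curve ordinates. This illustrates the Lorenz-curve summary the paper justifies, not the scaled conditional mean curve itself.

```python
import numpy as np

def gini(x):
    """Sample Gini coefficient via the standard sorted-values formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1.0) / n

def lorenz(x):
    """Lorenz curve ordinates L(k/n): cumulative income share of the poorest k."""
    x = np.sort(np.asarray(x, dtype=float))
    return np.concatenate(([0.0], np.cumsum(x) / x.sum()))

incomes = np.array([12.0, 18.0, 25.0, 40.0, 105.0])
g = gini(incomes)    # about 0.42 for this skewed toy distribution
L = lorenz(incomes)  # ratios such as L(u)/u scale the curve by population share
```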
9.
Summary.  Standard goodness-of-fit tests for a parametric regression model against a series of nonparametric alternatives are based on residuals arising from a fitted model. When a parametric regression model is compared with a nonparametric model, goodness-of-fit testing can be naturally approached by evaluating the likelihood of the parametric model within a nonparametric framework. We employ the empirical likelihood for an α-mixing process to formulate a test statistic that measures the goodness of fit of a parametric regression model. The technique is based on a comparison with kernel smoothing estimators. The empirical likelihood formulation of the test has two attractive features. One is its automatic consideration of the variation associated with the nonparametric fit, owing to empirical likelihood's ability to Studentize internally. The other is that the asymptotic distribution of the test statistic is free of unknown parameters, avoiding plug-in estimation. We apply the test to a discretized diffusion model which has recently been considered in financial market analysis.
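The kernel smoothing benchmark inside such a test is, at heart, a Nadaraya-Watson estimator. A hedged sketch of that ingredient follows; the bandwidth, kernel, and the linear parametric fit are illustrative choices, and the empirical likelihood statistic built on top of the discrepancy is not reproduced here.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and bandwidth h."""
    u = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * u**2)                    # kernel weights
    return (w * y_train[None, :]).sum(axis=1) / w.sum(axis=1)

# Example: compare a fitted linear (parametric) model with the kernel fit;
# large discrepancies are what the goodness-of-fit statistic aggregates.
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)
beta = np.polyfit(x, y, 1)                     # parametric (linear) fit
grid = np.linspace(0.05, 0.95, 50)
gap = nadaraya_watson(x, y, grid, h=0.05) - np.polyval(beta, grid)
```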
10.
In the binary single-constraint Knapsack Problem, denoted KP, we are given a knapsack of fixed capacity c and a set of n items. Each item j, j = 1,...,n, has an associated size or weight w_j and a profit p_j. The goal is to decide, for each item j, whether or not it should be included in the knapsack, so as to maximize the total profit without exceeding the capacity c. In this paper, we study the sensitivity of the optimum of the KP to perturbations of either the profit or the weight of an item. We give approximate and exact interval limits for both cases (profit and weight) and propose several polynomial-time algorithms able to reach these interval limits. The performance of the proposed algorithms is evaluated on a large number of problem instances.
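For reference, a minimal dynamic program for KP, plus a brute-force probe of how the optimum moves when one profit is perturbed, in the spirit of the sensitivity question studied. The O(n*c) table is the textbook approach, not the paper's interval algorithms, and the instance data are made up.

```python
def knapsack(weights, profits, c):
    """0/1 knapsack by dynamic programming over capacities; returns the optimum."""
    best = [0] * (c + 1)
    for w, p in zip(weights, profits):
        # iterate capacities downward so each item is used at most once
        for cap in range(c, w - 1, -1):
            best[cap] = max(best[cap], best[cap - w] + p)
    return best[c]

weights, profits, c = [3, 4, 5, 2], [30, 50, 60, 10], 8
base = knapsack(weights, profits, c)   # optimum of the unperturbed instance (90)

# Crude sensitivity probe: perturb profit p_2 and watch the optimum respond;
# the interval limits studied in the paper bound where the optimum is stable.
for delta in (-20, 0, 20):
    perturbed = profits.copy()
    perturbed[1] += delta
    print(delta, knapsack(weights, perturbed, c))
```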