41.
A ship that is not under control (NUC) poses a serious hazard, particularly in confined waters close to shore. The emergency response to NUC ships requires selecting the best risk control options, which is a challenge under restricted conditions (e.g., time limitation, resource constraints, and information asymmetry), particularly in inland waterway transportation. To enable a quick and effective response, this article develops a three-stage decision-making framework for NUC ship handling. The core of this method is (1) to propose feasible options for each involved entity (e.g., the maritime safety administration, the NUC ship, and passing ships) under resource constraints in the first stage, (2) to select the most feasible options by comparing the similarity of the new case with existing cases in the second stage, and (3) to make decisions that account for cooperation between the involved organizations by using a purpose-built Bayesian network in the third stage. Consequently, this work provides a useful tool for well-organized management of NUC ships.
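The second stage, retrieving the most similar historical case, can be sketched as a weighted similarity match over normalized incident attributes. The attributes, weights, and case values below are purely illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical stage-2 sketch: retrieve the historical NUC case most similar
# to a new incident by weighted similarity over normalized attributes.
# Attribute choice, weights, and values are illustrative assumptions.
cases = np.array([
    [0.9, 0.2, 0.7],   # case A: wind, traffic density, distance to shore
    [0.3, 0.8, 0.4],   # case B
    [0.7, 0.4, 0.5],   # case C
])
weights = np.array([0.5, 0.3, 0.2])      # assumed attribute importance
new_case = np.array([0.85, 0.25, 0.65])

# Weighted similarity = 1 - weighted mean absolute difference.
sims = 1.0 - np.abs(cases - new_case) @ weights
best = int(np.argmax(sims))              # index of the most similar case
```

The retrieved case's risk control options would then feed the third-stage Bayesian network decision.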
42.
We study nonlinear least-squares problems that can be transformed into linear problems by a change of variables. We derive a general formula for the statistically optimal weights and prove that the resulting linear regression gives an optimal estimate (one that satisfies an analogue of the Cramér-Rao lower bound) in the limit of small noise.
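A concrete instance of this idea, not the paper's general formula: the model y = a·exp(b·x) becomes linear after taking logs, and for small noise Var(log y) ≈ σ²/y², so weights proportional to y² are the statistically optimal choice for the linearized regression. The model and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# y = a*exp(b*x) linearizes as log(y) = log(a) + b*x.  For small noise,
# Var(log y) ~ sigma^2 / y^2, so weights ~ y^2 are (approximately) optimal.
a_true, b_true, sigma = 2.0, 0.5, 0.01
x = np.linspace(0.0, 2.0, 200)
y = a_true * np.exp(b_true * x) + sigma * rng.standard_normal(x.size)

X = np.column_stack([np.ones_like(x), x])
sw = np.sqrt(y ** 2)                     # sqrt of weights w_i = y_i^2
coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * np.log(y), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
```

Unweighted regression on log(y) would over-weight the small-y observations, whose log-scale noise is inflated.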
43.
Outlier detection algorithms are intimately connected with robust statistics that down-weight some observations to zero. We define a number of outlier detection algorithms related to the Huber-skip and least trimmed squares estimators, including the one-step Huber-skip estimator and the forward search. Next, we review a recently developed asymptotic theory for these estimators. Finally, we analyse the gauge, the fraction of wrongly detected outliers, for a number of outlier detection algorithms and establish asymptotic normal and Poisson theories for the gauge.
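A minimal sketch of the one-step Huber-skip idea (the cutoff and the planted contamination below are illustrative choices, not the paper's): fit by OLS, flag observations whose absolute residual exceeds c·σ̂ for a robust scale estimate σ̂, and refit on the retained set. The empirical gauge is the fraction of clean observations wrongly flagged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated regression with 10 planted gross outliers.
n = 500
x = rng.standard_normal(n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)
y[:10] += 15.0                               # gross outliers

X = np.column_stack([np.ones(n), x])
beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ beta0
sigma_hat = np.median(np.abs(r)) / 0.6745    # robust scale via MAD
c = 2.576                                    # ~1% nominal cut under normality
keep = np.abs(r) <= c * sigma_hat            # the "skip" step
beta1, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)

# Empirical gauge: fraction of the 490 clean observations wrongly flagged.
gauge = np.sum(~keep[10:]) / (n - 10)
```

With a 1% nominal cut, the gauge should be near 0.01 on clean data, matching the asymptotic theory the abstract refers to.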
44.
Simulations of forest inventory in several populations compared simple random sampling with "quick probability proportional to size" (QPPS) sampling. The latter may be applied in the absence of a list sampling frame and/or prior measurement of the auxiliary variable. The correlation between the auxiliary and target variables required to make QPPS sampling more efficient than simple random sampling varied over the range 0.3-0.6 and was lower when sampling from populations that were skewed to the right. Two candidate analytical estimators of the standard error of the estimated mean for QPPS sampling were found to be less reliable than bootstrapping.
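For orientation, here is a sketch of ordinary list-frame PPS sampling with the Hansen-Hurwitz estimator and a bootstrap standard error; the paper's "quick PPS" variant, which dispenses with the list frame, is not reproduced. The population model is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated population: auxiliary sizes and a correlated target variable.
N = 1000
aux = rng.gamma(shape=2.0, scale=5.0, size=N)
target = 3.0 * aux + rng.normal(0.0, 5.0, size=N)

p = aux / aux.sum()                          # draw probabilities ~ size
n = 50
idx = rng.choice(N, size=n, replace=True, p=p)

# Hansen-Hurwitz estimator of the population mean under PPS-with-replacement.
hh_mean = np.mean(target[idx] / (N * p[idx]))

# Bootstrap SE: resample the n PPS draws with replacement.
boots = []
for _ in range(500):
    b = rng.choice(idx, size=n, replace=True)
    boots.append(np.mean(target[b] / (N * p[b])))
se = np.std(boots, ddof=1)
```

When the auxiliary-target correlation is strong, as here, the PPS estimator is far more precise than a simple random sample of the same size.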
45.
This work studies the degree of smoothness attained by the methods most frequently used to choose the smoothing parameter in the context of splines: cross validation, generalized cross validation, and the corrected Akaike and Bayesian information criteria, implemented with penalized least squares. It is concluded that the amount of smoothness depends strongly on the length of the series and on the type of underlying trend, while the presence of seasonality, even though statistically significant, is less relevant. The intrinsic variability of the series is not statistically significant, and its effect is taken into account only through the smoothing parameter.
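The GCV criterion in this setting can be sketched with a Whittaker-type penalized least-squares smoother (a second-difference penalty standing in for the spline basis, an assumption for brevity): minimize over a grid of λ the criterion n·RSS(λ)/(n − tr H(λ))², where H(λ) is the hat matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy trend to be smoothed by penalized least squares.
n = 120
t = np.linspace(0.0, 3.0, n)
y = np.sin(t) + 0.1 * rng.standard_normal(n)

D = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
I = np.eye(n)

def gcv(lam):
    H = np.linalg.solve(I + lam * D.T @ D, I)    # hat matrix of the smoother
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

grid = 10.0 ** np.arange(-2.0, 5.0, 0.5)
lam_best = grid[np.argmin([gcv(l) for l in grid])]
fit = np.linalg.solve(I + lam_best * D.T @ D, y)
```

The selected λ governs the fitted trend's smoothness directly, which is the quantity the abstract's comparison is about.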
46.
The marginal likelihood can be notoriously difficult to compute, particularly in high-dimensional problems. Chib and Jeliazkov employed the local reversibility of the Metropolis-Hastings algorithm to construct an estimator for models in which the full conditional densities are not available analytically. The estimator is free of distributional assumptions and is directly linked to the simulation algorithm. However, it generally requires a sequence of reduced Markov chain Monte Carlo runs, which makes the method computationally demanding, especially when the parameter space is large. In this article, we study the implementation of this estimator for latent variable models in which the responses are conditionally independent given the latent variables (conditional or local independence). This property is exploited in the construction of a multi-block Metropolis-within-Gibbs algorithm that allows the estimator to be computed in a single run, regardless of the dimensionality of the parameter space. The counterpart one-block algorithm is also considered, and the differences between the two approaches are pointed out. The paper closes with illustrations of the estimator on simulated and real data sets.
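The basic one-block Chib-Jeliazkov estimator can be sketched on a toy model where the marginal likelihood is available in closed form for checking: y_i ~ N(θ, 1) with a N(0, 1) prior. The posterior ordinate at a point θ* is estimated from the MH acceptance probabilities, and log m(y) = log f(y|θ*) + log π(θ*) − log π̂(θ*|y). The multi-block latent-variable scheme of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 30
y = rng.normal(1.0, 1.0, size=n)

def log_post(th):                    # unnormalized log posterior
    return -0.5 * np.sum((y - th) ** 2) - 0.5 * th ** 2

# Random-walk Metropolis-Hastings sampling.
s, G, th = 0.5, 20000, 0.0
draws = np.empty(G)
for g in range(G):
    prop = th + s * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(th):
        th = prop
    draws[g] = th

th_star = draws.mean()               # high-density evaluation point

def alpha(a, b):                     # MH acceptance probability a -> b
    return np.exp(min(0.0, log_post(b) - log_post(a)))

q = lambda a, b: np.exp(-0.5 * ((b - a) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Chib-Jeliazkov posterior ordinate: E_post[alpha*q] / E_prop[alpha].
num = np.mean([alpha(d, th_star) * q(d, th_star) for d in draws])
prop_draws = th_star + s * rng.standard_normal(G)
den = np.mean([alpha(th_star, d) for d in prop_draws])

log_lik = -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - th_star) ** 2)
log_prior = -0.5 * np.log(2 * np.pi) - 0.5 * th_star ** 2
log_marg = log_lik + log_prior - np.log(num / den)

# Closed-form check: y ~ N(0, I + 11') under the conjugate normal model.
S, Q = y.sum(), np.sum(y ** 2)
log_marg_true = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1 + n)
                 - 0.5 * (Q - S ** 2 / (1 + n)))
```

The denominator requires draws from the proposal at θ*, which is the "reduced run" the abstract mentions; the paper's contribution is avoiding a sequence of such runs in multi-block settings.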
47.
The primary objective of a multi-regional clinical trial is to investigate the overall efficacy of a drug across regions and to evaluate the possibility of applying the overall trial result to a specific region. A challenge arises when the regional sample size is not large enough. We focus on the problem of evaluating the applicability of a drug to a specific region of interest under the criterion of preserving a certain proportion of the overall treatment effect in that region. We propose a variant of the James-Stein shrinkage estimator, in the empirical Bayes context, for the region-specific treatment effect. The estimator accommodates between-region variation and includes a finite-sample bias correction. We also propose a truncated version of the proposed shrinkage estimator to further limit risk in the presence of extreme values of the regional treatment effect. Based on the proposed estimator, we provide a consistency assessment criterion and a sample size calculation for the region of interest. Simulations demonstrate the performance of the proposed estimators in comparison with some existing methods. A hypothetical example illustrates the application of the proposed method.
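The underlying shrinkage idea can be sketched with a plain positive-part James-Stein estimator that pulls regional estimates toward the overall effect; the paper's finite-sample bias correction and truncation step are not reproduced, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Positive-part James-Stein shrinkage of regional treatment-effect
# estimates toward the overall (pooled) effect, risk-compared to the raw
# per-region estimates over repeated simulation.
k = 8                                         # number of regions
true_eff = rng.normal(0.5, 0.15, size=k)      # region-specific true effects
se = 0.3                                      # common per-region standard error

R = 2000
mse_raw = mse_js = 0.0
for _ in range(R):
    est = true_eff + se * rng.normal(size=k)  # raw regional estimates
    overall = est.mean()
    S = np.sum((est - overall) ** 2)
    shrink = max(0.0, 1.0 - (k - 3) * se ** 2 / S)   # positive-part factor
    js = overall + shrink * (est - overall)
    mse_raw += np.mean((est - true_eff) ** 2)
    mse_js += np.mean((js - true_eff) ** 2)
mse_raw /= R
mse_js /= R
```

When the between-region spread is small relative to the sampling error, as here, shrinkage reduces the average risk substantially, which is the rationale for borrowing strength across regions.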
48.
We apply the Abramson principle to define adaptive kernel estimators for the intensity function of a spatial point process. We derive asymptotic expansions for the bias and variance under the regime that n independent copies of a simple point process in Euclidean space are superposed. The method is illustrated by means of a simple example and applied to tornado data.
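Abramson's square-root law can be sketched in one dimension (the spatial point-process setting of the paper is not reproduced): compute a fixed-bandwidth pilot estimate, then rescale each local bandwidth by the inverse square root of the pilot density, normalized by its geometric mean.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D Abramson adaptive kernel density estimate.
x = rng.normal(0.0, 1.0, size=300)           # event locations
h = 0.4                                      # global bandwidth (assumed)

def kde(points, centers, bw):
    u = (points[:, None] - centers[None, :]) / bw
    return np.mean(np.exp(-0.5 * u ** 2) / (bw * np.sqrt(2 * np.pi)), axis=1)

pilot = kde(x, x, h)                         # fixed-bandwidth pilot
g = np.exp(np.mean(np.log(pilot)))           # geometric mean of pilot values
h_i = h * np.sqrt(g / pilot)                 # Abramson local bandwidths

grid = np.linspace(-6.0, 6.0, 601)
u = (grid[:, None] - x[None, :]) / h_i[None, :]
f_hat = np.mean(np.exp(-0.5 * u ** 2) / (h_i[None, :] * np.sqrt(2 * np.pi)),
                axis=1)
```

Larger bandwidths in low-density regions tame the variance in the tails, which is the usual motivation for the Abramson principle.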
49.
In this article, the least squares (LS) estimates of the parameters of periodic autoregressive (PAR) models are investigated for various error-term distributions via Monte Carlo simulation. Besides the Gaussian distribution, the study covers the exponential, gamma, Student's t, and Cauchy distributions. The estimates are compared across distributions by the bias and MSE criteria. The effects of other factors are also examined: non-constancy of the model orders, non-constancy of the variances of the seasonal white noise, the period length, and the length of the time series. The simulation results indicate that the method is in general robust for the estimation of the AR parameters with respect to the error-term distribution and the other factors, although the estimates of some parameters were noticeably poor for the Cauchy distribution. It is also noticed that the variances of the estimates of the white noise variances are strongly affected by the degree of skewness of the error-term distribution.
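LS estimation of a PAR model reduces to one regression per season. A sketch for a PAR(1) model with period 4, y_t = φ_{s(t)}·y_{t−1} + ε_t with s(t) = t mod 4 (the coefficient values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a periodically stationary PAR(1) process with period 4.
phi = np.array([0.5, -0.3, 0.8, 0.2])
S, T = 4, 4000
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi[t % S] * y[t - 1] + rng.standard_normal()

# Least squares: one regression of y_t on y_{t-1} per season.
phi_hat = np.empty(S)
for s in range(S):
    t = np.arange(1, T)
    idx = t[t % S == s]
    phi_hat[s] = np.sum(y[idx] * y[idx - 1]) / np.sum(y[idx - 1] ** 2)
```

The process is periodically stationary because the product of the seasonal coefficients is smaller than one in absolute value.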
50.
The two parametric distribution functions appearing in extreme-value theory, the generalized extreme-value distribution and the generalized Pareto distribution, have log-concave densities if the extreme-value index γ lies in [-1, 0]. Replacing the order statistics in tail-index estimators by the corresponding quantiles of the distribution function based on the estimated log-concave density f̂_n leads to novel smooth quantile and tail-index estimators. These new estimators aim at estimating the tail index especially in small samples. Acting as a smoother of the empirical distribution function, the log-concave distribution function estimator reduces estimation variability to a much greater extent than it introduces bias. As a consequence, Monte Carlo simulations demonstrate that the smoothed versions of the estimators are clearly superior to their non-smoothed counterparts in terms of mean squared error.
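For reference, the classical order-statistics-based Hill estimator, which the paper's variant smooths by substituting quantiles of a fitted log-concave distribution (the log-concave fitting step is not reproduced here), can be sketched as follows on exact Pareto data:

```python
import numpy as np

rng = np.random.default_rng(8)

# Classical Hill estimator of the tail index on a Pareto sample.
gamma_true = 0.5
n, k = 2000, 200                     # sample size, number of top order stats
u = rng.random(n)
x = u ** (-gamma_true)               # Pareto: P(X > t) = t**(-1/gamma_true)

xs = np.sort(x)
top = xs[-(k + 1):]                  # the k+1 largest order statistics
# Hill estimate: mean log-excess of the top k over the (k+1)-th largest.
hill = np.mean(np.log(top[1:] / top[0]))
```

In small samples the top order statistics are highly variable, which is exactly where replacing them by smoothed quantiles pays off according to the abstract.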