Access: subscription full text 299; free 14; free within China 1.
By subject: Management 12; Demography 5; Collected works 7; Theory and methodology 8; General 164; Sociology 3; Statistics 115.
By year: 2023: 3; 2021: 1; 2020: 4; 2019: 5; 2018: 12; 2017: 11; 2016: 13; 2015: 12; 2014: 18; 2013: 40; 2012: 24; 2011: 17; 2010: 15; 2009: 14; 2008: 13; 2007: 20; 2006: 15; 2005: 6; 2004: 9; 2003: 6; 2002: 5; 2001: 7; 2000: 7; 1999: 9; 1998: 4; 1997: 4; 1996: 4; 1995: 2; 1994: 3; 1993: 1; 1992: 4; 1991: 1; 1989: 1; 1988: 1; 1985: 2; 1979: 1.
A total of 314 results were found (search time: 0 ms).
1.
Index-based factor analysis is an important statistical analysis method, and the validity of its results depends on the index system employed. Factor analysis based on the traditional aggregative index system is convenient and practical, but it also has some shortcomings. This paper proposes carrying out factor analysis with alternative index systems in order to remedy the deficiencies of the traditional aggregative index system in factor analysis.
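As a hedged illustration (not taken from the article itself), the sketch below shows the classic two-factor decomposition under the traditional aggregative index system, splitting a value index into a quantity effect evaluated at base-period prices and a price effect evaluated at current-period quantities; all numbers are made up.

```python
# Illustrative two-factor index decomposition (aggregative index system).
# The value index Sum(p1*q1)/Sum(p0*q0) is split into a quantity index
# (base-period prices) and a price index (current-period quantities).
p0 = [10.0, 4.0]   # base-period prices (hypothetical)
p1 = [12.0, 5.0]   # current-period prices (hypothetical)
q0 = [100, 250]    # base-period quantities (hypothetical)
q1 = [110, 240]    # current-period quantities (hypothetical)

v0 = sum(p * q for p, q in zip(p0, q0))        # Sum p0*q0
v1 = sum(p * q for p, q in zip(p1, q1))        # Sum p1*q1
cross = sum(p * q for p, q in zip(p0, q1))     # Sum p0*q1

value_index = v1 / v0
quantity_index = cross / v0        # Laspeyres quantity index
price_index = v1 / cross           # Paasche price index

# The two factor indexes multiply back to the overall value index.
assert abs(value_index - quantity_index * price_index) < 1e-12
print(value_index, quantity_index, price_index)
```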
2.
Drawing on a concrete engineering example, this paper discusses how to determine the characteristic value of bearing capacity for a composite foundation when the range of the static load test results exceeds 30% of their mean, and proposes a comparatively reasonable method for selecting that value.
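As a hedged illustration of the screening condition the abstract refers to (the paper's actual value-selection rule is not reproduced), the sketch below checks whether the range of static load test results exceeds 30% of their mean; the test values are hypothetical.

```python
# Check whether the range of static load test results exceeds 30% of the mean,
# the situation the abstract addresses (values are hypothetical).
tests_kpa = [520.0, 610.0, 450.0]   # bearing capacities from three static load tests

mean_val = sum(tests_kpa) / len(tests_kpa)
rng = max(tests_kpa) - min(tests_kpa)

if rng > 0.30 * mean_val:
    # Extreme scatter: the simple arithmetic mean is not taken as the
    # characteristic value; further engineering judgement is required.
    print(f"range {rng:.0f} kPa exceeds 30% of mean {mean_val:.0f} kPa")
else:
    print(f"characteristic value may be taken as the mean: {mean_val:.0f} kPa")
```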
3.
Application of weighted composite quantile regression to dynamic VaR risk measurement   Total citations: 1 (self-citations: 0, citations by others: 1)
Value at Risk (VaR), being simple and intuitive, has become one of the most widely used risk measures internationally, and computing unconditional risk measures from autoregressive (AR) time-series models is common practice in industry. Based on quantile regression theory, this paper proposes an estimation method for AR models, the weighted composite quantile regression (WCQR) estimator, which pools information across multiple quantiles to improve the efficiency of parameter estimation and assigns different weights to the individual quantile regressions to make the estimation more efficient; the asymptotic normality of the estimator is established. Finite-sample simulations show that when the residuals follow a non-normal distribution, the statistical properties of the WCQR estimator are close to those of the maximum likelihood estimator, yet WCQR does not require knowledge of the residual distribution, so the proposed estimator is the more competitive choice. The method is well suited to forecasting the dynamic VaR of asset returns. Applying the proposed theory to nine closed-end funds in China, the empirical analysis finds that the VaR obtained with the WCQR method is very close to that obtained with a nonparametric method, while the WCQR method can also compute dynamic VaR and forecast the VaR of asset returns.
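To make the construction concrete, below is a minimal, hedged sketch in the spirit of the abstract: a composite quantile regression fit of an AR(1) model followed by a one-step-ahead VaR forecast. It assumes equal quantile weights (the article's WCQR derives data-dependent weights, which are not reproduced here), and all data and parameter values are simulated rather than taken from the paper.

```python
# Simplified (equal-weight) composite quantile regression for an AR(1) model
# and a one-step-ahead VaR forecast.  Sketch of the general idea only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for t in range(1, n):                       # simulate AR(1) returns with heavy-tailed noise
    y[t] = 0.3 * y[t - 1] + rng.standard_t(df=3) * 0.01

x, resp = y[:-1], y[1:]
taus = np.arange(1, 10) / 10.0              # composite over 9 quantile levels
weights = np.ones_like(taus) / len(taus)    # equal weights; WCQR would optimize these

def check_loss(u, tau):
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def cqr_objective(theta):
    # theta = (slope, intercept_1, ..., intercept_9): common slope, per-quantile intercepts
    slope, intercepts = theta[0], theta[1:]
    return sum(w * check_loss(resp - slope * x - a, tau).sum()
               for w, tau, a in zip(weights, taus, intercepts))

theta0 = np.concatenate(([0.0], np.quantile(resp, taus)))
fit = minimize(cqr_objective, theta0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
slope_hat = fit.x[0]

# One-step-ahead 5% VaR: negative of the conditional 5% quantile of tomorrow's return.
resid = resp - slope_hat * x
q05 = np.quantile(resid, 0.05)
var_5pct = -(slope_hat * y[-1] + q05)
print(f"slope={slope_hat:.3f}, one-step 5% VaR={var_5pct:.4f}")
```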
4.
Risk Analysis, 2018, 38(1): 194-209
This article presents the findings from a numerical simulation study that was conducted to evaluate the performance of alternative statistical analysis methods for background screening assessments when data sets are generated with incremental sampling methods (ISMs). A wide range of background and site conditions are represented in order to test different ISM sampling designs. Both hypothesis tests and upper tolerance limit (UTL) screening methods were implemented following U.S. Environmental Protection Agency (USEPA) guidance for specifying error rates. The simulations show that hypothesis testing using two-sample t-tests can meet standard performance criteria under a wide range of conditions, even with relatively small sample sizes. Key factors that affect the performance include unequal population variances and small absolute differences in population means. UTL methods are generally not recommended due to conceptual limitations in the technique when applied to ISM data sets from single decision units and due to insufficient power given standard statistical sample sizes from ISM.
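As a hedged illustration of the kind of comparison the study evaluates (not the study's own code or data), the sketch below runs a one-sided two-sample Welch t-test of site ISM replicates against background ISM replicates at a 0.05 error rate; all concentrations are hypothetical.

```python
# Hedged sketch of a two-sample t-test background comparison of the kind the
# article evaluates: site ISM replicates vs. background ISM replicates.
# Welch's test (equal_var=False) allows for unequal population variances,
# one of the factors the study highlights.
from scipy import stats

background = [12.1, 10.8, 13.5]      # background decision-unit ISM replicates (mg/kg)
site = [15.9, 17.2, 14.8]            # site decision-unit ISM replicates (mg/kg)

# One-sided test of H0: site mean <= background mean, at alpha = 0.05.
t_stat, p_two_sided = stats.ttest_ind(site, background, equal_var=False)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

alpha = 0.05
if p_one_sided < alpha:
    print(f"site exceeds background (p = {p_one_sided:.3f})")
else:
    print(f"no significant exceedance (p = {p_one_sided:.3f})")
```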
5.
This paper is dedicated to the study of composite quantile regression (CQR) estimation of time-varying parameter vectors for multidimensional diffusion models. Based on local linear fitting of the parameter vectors, we propose local linear CQR estimations of the drift parameter vectors and establish their asymptotic biases, asymptotic variances, and asymptotic normality. Moreover, we discuss the asymptotic relative efficiency (ARE) of the local linear CQR estimations with respect to the local linear least-squares estimations, and find that the proposed local estimations are much more efficient than the local linear least-squares estimations. Simulation studies are conducted to illustrate the performance of the proposed estimations.
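As a rough, hedged sketch of the type of criterion involved (notation assumed here rather than taken from the paper), a local linear CQR estimate of a time-varying parameter vector $\theta(t)$ at a point $t_0$ minimizes a kernel-weighted sum of check losses over several quantile levels:

```latex
\min_{a_k,\,\beta_0,\,\beta_1}\;
\sum_{k=1}^{K} \sum_{i=1}^{n}
\rho_{\tau_k}\!\Big( Y_i - a_k - \big(\beta_0 + \beta_1 (t_i - t_0)\big)^{\!\top} X_i \Big)\,
K_h(t_i - t_0),
\qquad
\rho_{\tau}(u) = u\big(\tau - \mathbf{1}\{u < 0\}\big),
```

where $\tau_1<\dots<\tau_K$ are the quantile levels, $K_h$ is a kernel with bandwidth $h$, $a_k$ are quantile-specific intercepts, and the local linear estimate is $\hat\theta(t_0)=\hat\beta_0$.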
6.
When a spatial point process model is fitted to spatial point pattern data using standard software, the parameter estimates are typically biased. Contrary to folklore, the bias does not reflect weaknesses of the underlying mathematical methods, but is mainly due to the effects of discretization of the spatial domain. We investigate two approaches to correcting the bias: a Newton-Raphson-type correction and Richardson extrapolation. In simulation experiments, Richardson extrapolation performs best.
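As a hedged, generic illustration of the second correction mentioned above (the spatial point process fitting itself is not reproduced, and the numbers are hypothetical), the sketch below applies Richardson extrapolation to two fits obtained at dummy-point intensities n and 2n, under the standard assumption that the leading discretization bias is of order 1/n:

```python
# Richardson extrapolation of a parameter estimate computed at two levels of
# discretization.  theta_n and theta_2n stand for fits obtained with n and 2n
# dummy points; the numbers are hypothetical, and the O(1/n) bias assumption
# is the usual premise of this extrapolation, not a result quoted from the paper.
def richardson(theta_n, theta_2n):
    # If theta(n) = theta_true + c/n + o(1/n), then
    # 2*theta(2n) - theta(n) cancels the leading 1/n term.
    return 2.0 * theta_2n - theta_n

theta_n, theta_2n = 1.84, 1.92        # hypothetical fitted values
print(richardson(theta_n, theta_2n))  # -> 2.00, the extrapolated estimate
```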
7.
Analysis of massive datasets is challenging owing to limitations of computer primary memory. Composite quantile regression (CQR) is a robust and efficient estimation method. In this paper, we extend CQR to massive datasets and propose a divide-and-conquer CQR method. The basic idea is to split the entire dataset into several blocks, apply the CQR method to the data in each block, and finally combine these regression results via a weighted average. The proposed approach significantly reduces the required amount of primary memory, and the resulting estimate is as efficient as if the entire dataset were analysed simultaneously. Moreover, to improve the efficiency of CQR, we propose a weighted CQR estimation approach. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure to select significant parametric components and prove that the method possesses the oracle property. Both simulations and data analysis are conducted to illustrate the finite-sample performance of the proposed methods.
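The following hedged sketch illustrates the divide-and-conquer idea on simulated data: split the sample into blocks, fit a composite quantile regression with a common slope in each block, and combine the block estimates by averaging. A plain average is used for brevity, whereas the paper combines blocks via a weighted average; the data, block count, and quantile levels are all illustrative.

```python
# Divide-and-conquer composite quantile regression sketch: per-block fits with a
# common slope and per-quantile intercepts, combined by a simple average.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, n_blocks = 6000, 6
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors
taus = np.array([0.25, 0.5, 0.75])

def cqr_slope(xb, yb):
    def loss(theta):                       # theta = (slope, intercepts...)
        u = yb[:, None] - theta[0] * xb[:, None] - theta[1:][None, :]
        return np.sum(np.where(u >= 0, taus * u, (taus - 1) * u))
    start = np.concatenate(([0.0], np.quantile(yb, taus)))
    return minimize(loss, start, method="Nelder-Mead",
                    options={"maxiter": 20000}).x[0]

block_slopes = [cqr_slope(xb, yb)
                for xb, yb in zip(np.array_split(x, n_blocks),
                                  np.array_split(y, n_blocks))]
print(np.mean(block_slopes))               # combined slope estimate, close to 2.0
```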
8.
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores dividing patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification with regard to two issues: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while performing the LRT sacrifices approximately 4.5% of the overall adaptive power to detect treatment effects for the simulation designs considered; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
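As a hedged illustration of the second ingredient (a likelihood-based change-point search for the score cutoff), the sketch below scans candidate cutoffs of a univariate score and keeps the one maximizing a two-group Gaussian log-likelihood; the scores are simulated, and the Gaussian working likelihood and minimum subgroup size are assumptions of this sketch, not details from the article.

```python
# Likelihood-based change-point search for a score cutoff: scan candidate
# cutoffs and pick the one maximizing a two-group Gaussian log-likelihood.
import numpy as np

rng = np.random.default_rng(2)
scores = np.sort(np.concatenate([rng.normal(0.0, 1.0, 140),     # "negative" subgroup
                                 rng.normal(1.5, 1.0, 60)]))    # "positive" subgroup

def two_group_loglik(left, right):
    ll = 0.0
    for g in (left, right):
        s = max(g.std(), 1e-6)
        ll += np.sum(-0.5 * np.log(2 * np.pi * s**2) - (g - g.mean())**2 / (2 * s**2))
    return ll

candidates = scores[10:-10]                 # keep at least 10 patients per side
best_cut = max(candidates,
               key=lambda c: two_group_loglik(scores[scores <= c], scores[scores > c]))
print(f"estimated cutoff: {best_cut:.2f} (median would be {np.median(scores):.2f})")
```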
9.
We introduce a general class of continuous univariate distributions with positive support obtained by transforming the class of two-piece distributions. We show that this class of distributions is very flexible, easy to implement, and contains members that can capture different tail behaviours and shapes, producing also a variety of hazard functions. The proposed distributions represent a flexible alternative to the classical choices such as the log-normal, Gamma, and Weibull distributions. We investigate empirically the inferential properties of the proposed models through an extensive simulation study. We present some applications using real data in the contexts of time-to-event and accelerated failure time models. In the second kind of applications, we explore the use of these models in the estimation of the distribution of the individual remaining life.
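As a hedged example of the kind of construction described (one plausible member of such a class, not necessarily the parameterization used in the paper), the sketch below obtains a positive-support distribution with asymmetric tails by exponentiating a two-piece normal, i.e. a normal with different scale parameters on either side of its mode:

```python
# Sample from a "log two-piece normal": exponentiate a two-piece normal with
# scale sigma1 to the left of mu and sigma2 to the right.  One plausible member
# of the kind of class described; parameter values are illustrative.
import numpy as np

def rvs_log_two_piece_normal(mu, sigma1, sigma2, size, rng):
    side = rng.random(size) < sigma1 / (sigma1 + sigma2)   # probability of the left piece
    z = np.abs(rng.standard_normal(size))
    latent = np.where(side, mu - sigma1 * z, mu + sigma2 * z)
    return np.exp(latent)                                   # positive support

rng = np.random.default_rng(3)
x = rvs_log_two_piece_normal(mu=0.0, sigma1=0.5, sigma2=1.5, size=100_000, rng=rng)
print(x.mean(), np.median(x), (x > 10).mean())   # heavy right tail from sigma2 > sigma1
```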
10.
We propose a semiparametric estimator for single-index models with censored responses due to detection limits. In the presence of left censoring, the mean function cannot be identified without any parametric distributional assumptions, but the quantile function is still identifiable at upper quantile levels. To avoid parametric distributional assumptions, we propose to fit censored quantile regression and combine information across quantile levels to estimate the unknown smooth link function and the index parameter. Under some regularity conditions, we show that the estimated link function achieves the non-parametric optimal convergence rate, and the estimated index parameter is asymptotically normal. The simulation study shows that the proposed estimator is competitive with the omniscient least squares estimator based on the latent uncensored responses for data with normal errors but much more efficient for heavy-tailed data under light and moderate censoring. The practical value of the proposed method is demonstrated through the analysis of a human immunodeficiency virus antibody data set.
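As a hedged, much-simplified illustration of why upper quantiles remain identifiable under left censoring at a detection limit, the sketch below fits a Powell-type censored quantile regression at a single upper quantile with a linear (rather than single-index) specification; the data are simulated and none of the tuning choices come from the article.

```python
# Powell-type censored quantile regression at an upper quantile for data
# left-censored at a known detection limit.  Simplified linear, single-quantile
# version only; the article's single-index, multi-quantile method is not shown.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, d = 2000, 0.0                      # d = detection limit (left censoring point)
x = rng.normal(size=n)
y_latent = 0.5 + 1.0 * x + rng.standard_t(df=3, size=n)
y = np.maximum(y_latent, d)           # observed response, censored from below

tau = 0.7                             # an upper quantile level

def powell_loss(beta):
    fitted = np.maximum(d, beta[0] + beta[1] * x)      # censored conditional quantile
    u = y - fitted
    return np.sum(np.where(u >= 0, tau * u, (tau - 1) * u))

fit = minimize(powell_loss, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(fit.x)                          # estimated (intercept, slope); the true slope is 1.0
```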