51.
We propose a family of goodness-of-fit tests for copulas. The tests use generalizations of the information matrix (IM) equality of White and so relate to the copula test proposed by Huang and Prokhorov. The idea is that eigenspectrum-based statements of the IM equality reduce the degrees of freedom of the test's asymptotic distribution and lead to better size-power properties, even in high dimensions. The gains are especially pronounced for vine copulas, where additional benefits come from simplifications of the score functions and the Hessian. We derive the asymptotic distribution of the generalized tests, accounting for the nonparametric estimation of the marginals, and apply a parametric bootstrap procedure that is valid when asymptotic critical values are inaccurate. In Monte Carlo simulations, we study the behavior of the new tests, compare them with several Cramér–von Mises type tests, and confirm the desired properties of the new tests in high dimensions.
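The parametric bootstrap scheme mentioned above (fit, simulate from the fitted model, refit, recompute the statistic) can be sketched generically. This is a minimal Python illustration, not the authors' IM-based copula test; the toy statistic checks the information-matrix equality for the location parameter of a normal model, and all function names are illustrative.

```python
import numpy as np

def parametric_bootstrap_pvalue(data, fit, simulate, statistic, n_boot=200, seed=0):
    """Generic parametric bootstrap: fit the model, compute the observed
    statistic, then re-simulate from the fitted model (refitting each time)
    to approximate the null distribution of the statistic."""
    rng = np.random.default_rng(seed)
    theta_hat = fit(data)
    t_obs = statistic(data, theta_hat)
    t_boot = []
    for _ in range(n_boot):
        sim = simulate(theta_hat, len(data), rng)
        theta_b = fit(sim)                      # refit on each bootstrap sample
        t_boot.append(statistic(sim, theta_b))
    return np.mean(np.asarray(t_boot) >= t_obs)

# Toy IM-equality statistic for a N(mu, sigma^2) model: the squared gap
# between the outer-product and Hessian estimates of the information for mu.
fit = lambda x: (x.mean(), x.std(ddof=0))
simulate = lambda th, n, rng: rng.normal(th[0], th[1], n)
statistic = lambda x, th: (np.mean(((x - th[0]) / th[1]**2)**2) - 1.0 / th[1]**2)**2

rng = np.random.default_rng(1)
p = parametric_bootstrap_pvalue(rng.normal(0.0, 1.0, 300), fit, simulate, statistic)
```

Under the null the two information estimates agree in expectation, so the bootstrap p-value should not be systematically small.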
52.
Şiray et al. proposed a restricted Liu estimator to overcome multicollinearity in the logistic regression model. They also used a Monte Carlo simulation to study the properties of the restricted Liu estimator, but they did not present theoretical results on its mean squared error properties compared to the maximum likelihood estimator (MLE), the restricted maximum likelihood estimator (RMLE), and the Liu estimator. In this article, we compare the restricted Liu estimator with the MLE, RMLE, and Liu estimator in the mean squared error sense, and we also present a method for choosing the biasing parameter. Finally, a real data example and a Monte Carlo simulation illustrate the benefits of the restricted Liu estimator.
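For context, the (unrestricted) Liu estimator in logistic regression shrinks the MLE as β_d = (X'ŴX + I)⁻¹(X'ŴX + dI)β̂ with biasing parameter 0 < d < 1. A minimal Python sketch, assuming a Newton–Raphson MLE; the restricted version discussed in the abstract would additionally impose linear restrictions, which is omitted here.

```python
import numpy as np

def logistic_mle(X, y, iters=50):
    """Maximum-likelihood fit of a logistic regression by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                      # IRLS weights
        H = X.T @ (W[:, None] * X)             # information matrix X'WX
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta, H

def liu_logistic(X, y, d=0.5):
    """Liu-type shrinkage of the logistic MLE:
    beta_d = (X'WX + I)^(-1) (X'WX + d I) beta_mle,  0 < d < 1."""
    beta_mle, H = logistic_mle(X, y)
    k = X.shape[1]
    return np.linalg.solve(H + np.eye(k), (H + d * np.eye(k)) @ beta_mle)

rng = np.random.default_rng(0)
n = 400
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)            # nearly collinear regressor
X = np.column_stack([np.ones(n), x1, x2])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + x1)))).astype(float)
beta_d = liu_logistic(X, y, d=0.5)
```

Since the shrinkage matrix has eigenvalues (λᵢ + d)/(λᵢ + 1) < 1, the Liu estimate is pulled toward the origin relative to the unstable MLE.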
53.
In this study, we investigate linear regression with both heteroskedasticity and collinearity problems. We discuss the properties of the perturbation method and summarize the important observations as theorems. We then prove the main result: the heteroskedasticity-robust variances can be improved, and the resulting bias minimized, by using the matrix perturbation method. We analyze a practical example to validate the method.
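The abstract does not spell out the perturbation, so the following is only an assumption-laden illustration: one common ridge-type variant perturbs X'X before forming the sandwich, stabilising the heteroskedasticity-robust (HC0) variance under near-collinearity. This is not necessarily the authors' method.

```python
import numpy as np

def hc0_perturbed(X, y, eps=1e-2):
    """White's heteroskedasticity-robust (HC0) covariance with a small ridge
    perturbation of X'X -- one simple way to stabilise the sandwich 'bread'
    when the regressors are nearly collinear (illustrative only)."""
    k = X.shape[1]
    A = X.T @ X + eps * np.eye(k)              # perturbed bread
    beta = np.linalg.solve(A, X.T @ y)
    u = y - X @ beta                           # residuals
    meat = X.T @ (u[:, None]**2 * X)           # X' diag(u^2) X
    Ainv = np.linalg.inv(A)
    return beta, Ainv @ meat @ Ainv            # sandwich covariance estimate

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x1 + 1e-3 * rng.normal(size=n)])
y = 1.0 + x1 + np.abs(x1) * rng.normal(size=n)  # heteroskedastic errors
beta, V = hc0_perturbed(X, y)
```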
54.
Robust parameter designs (RPDs) enable the experimenter to discover how to modify the design of a product to minimize the effect of variation from noise sources. The aim of this article is to show how the experimental effort can be reduced under a modified central composite design (MCCD). We propose a measure of extended scaled prediction variance (ESPV) for evaluating RPDs on an MCCD. Using this measure, we show how to check the error or bias associated with estimating the model parameters, and we suggest the values of α recommended for the MCCD under minimum ESPV.
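The ESPV measure itself is not defined in the abstract, but the standard scaled prediction variance it extends, SPV(x) = N f(x)'(F'F)⁻¹f(x), is easy to sketch on an ordinary two-factor central composite design. A minimal Python illustration (the extension to noise factors and ESPV is omitted, and the α and centre-run values are illustrative):

```python
import numpy as np
from itertools import product

def ccd_points(alpha=np.sqrt(2), n_center=3):
    """Central composite design in two factors: 2^2 factorial points,
    four axial points at distance alpha, and replicated centre runs."""
    factorial = np.array(list(product([-1.0, 1.0], repeat=2)))
    axial = np.array([[alpha, 0], [-alpha, 0], [0, alpha], [0, -alpha]])
    center = np.zeros((n_center, 2))
    return np.vstack([factorial, axial, center])

def model_matrix(D):
    """Full second-order model: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = D[:, 0], D[:, 1]
    return np.column_stack([np.ones(len(D)), x1, x2, x1 * x2, x1**2, x2**2])

def spv(D, x):
    """Scaled prediction variance N * f(x)' (F'F)^{-1} f(x)."""
    F = model_matrix(D)
    f = model_matrix(np.atleast_2d(x))[0]
    return len(D) * f @ np.linalg.solve(F.T @ F, f)

D = ccd_points()
v_center = spv(D, [0.0, 0.0])
v_corner = spv(D, [1.0, 1.0])
```

As expected for a rotatable CCD, prediction variance is lowest near the centre of the region and grows toward the design perimeter.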
55.
Analysis of massive datasets is challenging owing to limitations of computer primary memory. Composite quantile regression (CQR) is a robust and efficient estimation method. In this paper, we extend CQR to massive datasets and propose a divide-and-conquer CQR method. The basic idea is to split the entire dataset into several blocks, apply the CQR method to the data in each block, and finally combine these regression results via a weighted average. The proposed approach significantly reduces the required amount of primary memory, and the resulting estimate is as efficient as if the entire dataset were analysed simultaneously. Moreover, to improve the efficiency of CQR, we propose a weighted CQR estimation approach. To achieve sparsity with high-dimensional covariates, we develop a variable selection procedure to select significant parametric components and prove that the method possesses the oracle property. Both simulations and a data analysis illustrate the finite-sample performance of the proposed methods.
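The split/estimate/combine scheme can be sketched directly. In this minimal Python illustration, ordinary least squares stands in for the per-block CQR solver (a CQR fit would replace `block_estimate`); the blocks are combined by a size-weighted average as the abstract describes.

```python
import numpy as np

def block_estimate(Xb, yb):
    """Per-block estimator. OLS is a stand-in here for the composite
    quantile regression solver used in the paper."""
    return np.linalg.lstsq(Xb, yb, rcond=None)[0]

def divide_and_conquer(X, y, n_blocks=10):
    """Split the data into blocks, estimate within each block, and combine
    the block estimates by a weighted average (weights = block sizes)."""
    idx = np.array_split(np.arange(len(y)), n_blocks)
    betas, weights = [], []
    for ix in idx:
        betas.append(block_estimate(X[ix], y[ix]))
        weights.append(len(ix))
    w = np.asarray(weights, float) / len(y)
    return np.sum(w[:, None] * np.asarray(betas), axis=0)

rng = np.random.default_rng(3)
n = 10_000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(size=n)
beta_dc = divide_and_conquer(X, y, n_blocks=10)
```

Only one block of rows needs to reside in primary memory at a time, which is the point of the divide-and-conquer construction.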
56.
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores dividing the patients into biomarker‐positive and biomarker‐negative subgroups. However, this may bias patient subgroup identification in two situations: (1) treatment is equally effective for all patients and/or there is no subgroup difference; (2) the median score is an inappropriate cutoff when the sizes of the two subgroups differ substantially. We use a univariate composite score method to convert each patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients; in the context of identifying the subgroup of responders in an adaptive design to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we use a likelihood‐based change‐point algorithm to find an optimal cutoff. Our simulation study shows that type I error is generally controlled, while performing the LRT sacrifices approximately 4.5% of the overall adaptive power to detect treatment effects for the simulation designs considered; furthermore, the change‐point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
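A likelihood-based change-point search for the cutoff, as opposed to a median split, can be sketched as follows. This Python toy uses a two-group normal working likelihood on a continuous response (an assumption; the paper's setting has survival endpoints), with deliberately unbalanced subgroup sizes where the median cutoff would misclassify many patients.

```python
import numpy as np

def gauss_loglik(x):
    """Profile normal log-likelihood of a sample (mean, variance at MLE)."""
    n = len(x)
    s2 = max(x.var(), 1e-12)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def change_point_cutoff(scores, responses, min_size=10):
    """Scan candidate cutoffs on the score scale and pick the one that
    maximises the two-group likelihood of the response."""
    order = np.argsort(scores)
    s, r = scores[order], responses[order]
    best_ll, best_cut = -np.inf, None
    for i in range(min_size, len(s) - min_size):
        ll = gauss_loglik(r[:i]) + gauss_loglik(r[i:])
        if ll > best_ll:
            best_ll, best_cut = ll, s[i]
    return best_cut

rng = np.random.default_rng(4)
n_neg, n_pos = 150, 50                       # subgroups of very different size
scores = np.concatenate([rng.normal(0, 1, n_neg), rng.normal(3, 1, n_pos)])
resp = np.concatenate([rng.normal(0, 1, n_neg), rng.normal(2, 1, n_pos)])
cut = change_point_cutoff(scores, resp)
```

With 150 versus 50 patients, the median score falls well inside the larger subgroup, while the change-point estimate lands near the true boundary between the two score distributions.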
57.
In this paper, locally D-optimal saturated designs for a logistic model with one and two continuous input variables are constructed by modifying the well-known Fedorov exchange algorithm. A saturated design not only ensures the minimum number of runs but also simplifies the row-exchange computation. The basic idea is to exchange a design point with a point from the design space: the algorithm performs the best row exchange between design points and points from a candidate set representing the design space. Naturally, the resulting designs depend on the candidate set. For a gain in precision, a candidate set with more points and low discrepancy is intuitively desirable, but it increases the computational cost. Apart from the modification of the row-exchange computation, we propose implementing the algorithm in two stages: first construct a design with a candidate set of affordable size, and then generate a new candidate set around the points of the design found in the first stage. To validate the optimality of the constructed designs, we use the general equivalence theorem. The algorithms for constructing the optimal designs have been implemented in R.
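A bare-bones Fedorov-type exchange for the one-variable case might look like the following (Python rather than the paper's R, a saturated two-run design, and a uniform grid as candidate set; the two-stage candidate refinement is omitted). Each pass replaces a design point with the candidate that most increases det M(ξ, β) at the guessed parameter value.

```python
import numpy as np

def info_matrix(design, beta):
    """Fisher information of a one-variable logistic model at beta."""
    F = np.column_stack([np.ones(len(design)), design])
    eta = F @ beta
    w = np.exp(eta) / (1.0 + np.exp(eta))**2   # logistic weights p(1-p)
    return F.T @ (w[:, None] * F)

def fedorov_exchange(candidates, beta, n_runs=2, iters=50, seed=0):
    """Exchange algorithm: start from random support points and repeatedly
    swap a design point for the candidate that increases det M
    (local D-optimality at the guessed beta)."""
    rng = np.random.default_rng(seed)
    design = rng.choice(candidates, n_runs, replace=False)
    for _ in range(iters):
        improved = False
        for i in range(n_runs):
            base = np.linalg.det(info_matrix(design, beta))
            for c in candidates:
                trial = design.copy()
                trial[i] = c
                if np.linalg.det(info_matrix(trial, beta)) > base + 1e-12:
                    design = trial
                    base = np.linalg.det(info_matrix(trial, beta))
                    improved = True
        if not improved:
            break                              # no exchange helps: stop
    return np.sort(design)

beta = np.array([0.0, 1.0])                    # local guess for the parameters
candidates = np.linspace(-4.0, 4.0, 161)
design = fedorov_exchange(candidates, beta)
```

For this model the known locally D-optimal two-point design is symmetric about the guessed ED50, so the two returned support points should straddle zero.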
58.
59.
We introduce a general class of continuous univariate distributions with positive support obtained by transforming the class of two-piece distributions. We show that this class of distributions is very flexible, easy to implement, and contains members that can capture different tail behaviours and shapes, also producing a variety of hazard functions. The proposed distributions represent a flexible alternative to classical choices such as the log-normal, Gamma, and Weibull distributions. We investigate empirically the inferential properties of the proposed models through an extensive simulation study. We present some applications using real data in the contexts of time-to-event and accelerated failure time models. In the second kind of application, we explore the use of these models to estimate the distribution of the individual remaining life.
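One concrete member of such a class: exponentiating a two-piece normal variable Y gives a positive-support density f_X(x) = f_Y(log x)/x. A small Python sketch with a numerical check of the total mass (the "log-two-piece" construction and the parameter values here are illustrative, not the paper's specific transformation):

```python
import numpy as np

def twopiece_normal_pdf(y, mu=0.0, s1=1.0, s2=2.0):
    """Two-piece normal: different scale parameters on each side of mu."""
    phi = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    s = np.where(y < mu, s1, s2)
    return (2.0 / (s1 + s2)) * phi((y - mu) / s)

def log_twopiece_pdf(x, mu=0.0, s1=1.0, s2=2.0):
    """Positive-support density from the transformation X = exp(Y),
    Y two-piece normal: f_X(x) = f_Y(log x) / x."""
    x = np.asarray(x, float)
    return twopiece_normal_pdf(np.log(x), mu, s1, s2) / x

# Midpoint-rule check that the transformed density integrates to ~1.
dx = 1e-3
grid = np.arange(dx / 2.0, 2000.0, dx)
mass = np.sum(log_twopiece_pdf(grid)) * dx
```

Choosing s1 ≠ s2 skews the density, giving tail behaviour (and hence hazards) unattainable by a plain log-normal.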
60.
In the 2006 French housing survey, information is collected on many aspects of housing to describe the housing stock in France and the housing conditions of French households. The basic national sample comes from a multistage sampling design. Complementary samples were selected to obtain accurate estimates for socio-demographic domains. Some French regions carried out a regional and local extension of the national sample. We estimate the variance for a region with a regional and local extension of the basic national sample.