891.
The current study attempts a simultaneous test of economic models, the gender display perspective, and the gender-deviance neutralization hypothesis, which attempt to explain present housework arrangements between men and women. The study uses fixed effects models, which produce more robust coefficients than the standard regression models generally used in cross-sectional designs. The findings reveal the inadequacy of economic models and the gender display theory to account for men’s housework behavior. The study introduces the marital contract hypothesis as an alternative theoretical framework for explaining men’s housework behavior. According to the study, what is crucial for achieving housework parity is change in women’s gender-related attitudes, their economic and labor market standing, and their orientation to paid work. The study suggests that attempting to change men’s gender beliefs can do little to achieve the goal of housework parity.
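A minimal sketch of the fixed effects (within-transformation) estimator on simulated panel data follows; the variable names, effect sizes and data are illustrative assumptions only, not the study’s data or specification.

```python
# Minimal sketch: person fixed effects via the within (demeaning) transformation.
# Simulated panel data; variable names and effect sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_waves = 500, 4
person = np.repeat(np.arange(n_people), n_waves)

# Hypothetical time-varying covariates: relative earnings share and paid work hours.
earnings_share = rng.normal(size=n_people * n_waves)
work_hours = rng.normal(size=n_people * n_waves)
alpha = rng.normal(size=n_people)[person]          # unobserved person effect
housework = 10 + alpha - 1.5 * earnings_share - 0.8 * work_hours \
    + rng.normal(size=n_people * n_waves)

def within(x, groups):
    """Subtract each person's mean (the fixed effects transformation)."""
    totals = np.zeros(groups.max() + 1)
    np.add.at(totals, groups, x)
    counts = np.bincount(groups)
    return x - (totals / counts)[groups]

X = np.column_stack([within(earnings_share, person), within(work_hours, person)])
y = within(housework, person)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("within-estimator coefficients:", beta)      # ~[-1.5, -0.8]; person effects removed
```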
892.
Robust variable selection with application to quality of life research
A large database containing socioeconomic data from 60 communities in Austria and Germany has been built, combining 18,000 citizens’ responses to a survey with data about these communities from official statistical institutes. This paper describes a procedure for extracting a small set of explanatory variables to explain response variables such as the perceived quality of life. For better interpretability, the set of explanatory variables needs to be very small and the dependencies among the selected variables need to be low. Due to possible inhomogeneities within the data set, it is further required that the solution be robust to outliers and deviating points. In order to achieve these goals, a robust model selection method, combined with a strategy for reducing the number of selected predictor variables to a necessary minimum, is developed. In addition, this context-sensitive method is applied to identify the factors responsible for quality of life in communities.
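A rough sketch of the general idea, pairing a robust fit (Huber regression) with forward selection of a very small predictor set on simulated data; the criterion, data and variable count are illustrative assumptions and not the paper’s procedure.

```python
# Minimal sketch: forward selection of a small predictor set using a robust fit
# (Huber regression) and a robust error criterion (median absolute residual).
# Illustration of the general idea only, not the paper's method.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)
y[:10] += 15                                    # a few outlying responses

def robust_error(cols):
    model = HuberRegressor().fit(X[:, cols], y)
    return np.median(np.abs(y - model.predict(X[:, cols])))

selected, remaining = [], list(range(p))
for _ in range(3):                              # keep the selected set very small
    best = min(remaining, key=lambda j: robust_error(selected + [j]))
    selected.append(best)
    remaining.remove(best)
print("selected predictors:", selected)         # should recover columns 0 and 3
```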
893.
We introduce a general Monte Carlo method based on Nested Sampling (NS), for sampling complex probability distributions and estimating the normalising constant. The method uses one or more particles, which explore a mixture of nested probability distributions, each successive distribution occupying approximately e^{-1} times the enclosed prior mass of the previous distribution. While NS technically requires independent generation of particles, Markov Chain Monte Carlo (MCMC) exploration fits naturally into this technique. We illustrate the new method on a test problem and find that it can achieve four times the accuracy of classic MCMC-based Nested Sampling, for the same computational effort; equivalent to a factor of 16 speedup. An additional benefit is that more samples and a more accurate evidence value can be obtained simply by continuing the run for longer, as in standard MCMC.
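A minimal sketch of classic nested sampling with an MCMC replacement step, on a toy one-dimensional problem; it illustrates the evidence accumulation and the geometric shrinkage of the enclosed prior mass, not the new variant proposed in the paper.

```python
# Minimal sketch of classic Nested Sampling with an MCMC replacement step,
# on a toy 1-D problem: uniform prior on [-10, 10], standard normal likelihood.
# Purely illustrative; not the method proposed in the paper.
import numpy as np

rng = np.random.default_rng(2)
def loglike(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

N, steps = 100, 2000
live = rng.uniform(-10, 10, size=N)              # live points drawn from the prior
live_logL = loglike(live)
logZ = -np.inf
log_width = np.log(1 - np.exp(-1.0 / N))         # first prior-mass shell X_0 - X_1

for i in range(steps):
    worst = np.argmin(live_logL)
    logL_star = live_logL[worst]
    logZ = np.logaddexp(logZ, log_width + logL_star)   # accumulate evidence
    # Replace the worst point by random-walk MCMC within the constraint L > L*.
    x = live[rng.integers(N)]
    for _ in range(20):
        prop = x + rng.normal(scale=1.0)
        if -10 < prop < 10 and loglike(prop) > logL_star:
            x = prop
    live[worst], live_logL[worst] = x, loglike(x)
    log_width -= 1.0 / N                               # prior mass shrinks by ~e^{-1/N}

print("log evidence estimate:", logZ, " (true value approx:", np.log(1 / 20), ")")
```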
894.
Many exploratory experiments such as DNA microarray or brain imaging studies require simultaneous comparisons of hundreds or thousands of hypotheses. Under such a setting, using the false discovery rate (FDR) as an overall Type I error rate is recommended (Benjamini and Hochberg in J. R. Stat. Soc. B 57:289–300, 1995). Many FDR controlling procedures have been proposed. However, when evaluating the performance of FDR-controlling procedures, researchers often focus on the ability of procedures to control the FDR and to achieve high power. Meanwhile, under multiple hypotheses, it is also possible to commit a false non-discovery or to fail to declare a truly non-significant hypothesis. In addition, various experimental parameters such as the number of hypotheses, the proportion of true null hypotheses, the sample size and the correlation structure may affect the performance of FDR controlling procedures. The purpose of this paper is to illustrate the performance of some existing FDR controlling procedures in terms of four indices, i.e., the FDR, the false non-discovery rate, the sensitivity and the specificity. Analytical results of these indices for the FDR controlling procedures are derived. Simulations are also performed to evaluate the performance of the controlling procedures in terms of these indices under various experimental parameters. The results can serve as guidance for practitioners in choosing an appropriate FDR controlling procedure.
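A minimal simulation sketch of the Benjamini-Hochberg procedure and the four indices on synthetic one-sided z-tests; the number of hypotheses, null proportion and effect size are illustrative assumptions.

```python
# Minimal sketch: one simulation run of the Benjamini-Hochberg procedure and the
# four indices discussed above (FDR, false non-discovery rate, sensitivity,
# specificity). Parameters are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
m, pi0, effect = 1000, 0.8, 3.0                    # hypotheses, null proportion, signal size
is_null = rng.random(m) < pi0
z = rng.normal(size=m) + np.where(is_null, 0.0, effect)
pvals = stats.norm.sf(z)                           # one-sided p-values

def benjamini_hochberg(p, q=0.05):
    """Step-up BH: reject the k smallest p-values, k = max{i : p_(i) <= q*i/m}."""
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(len(p), dtype=bool)
    reject[order[:k]] = True
    return reject

reject = benjamini_hochberg(pvals, q=0.05)
V = np.sum(reject & is_null)       # false discoveries
T = np.sum(~reject & ~is_null)     # false non-discoveries
print("FDR (this run):", V / max(reject.sum(), 1))
print("FNR (this run):", T / max((~reject).sum(), 1))
print("sensitivity:", np.sum(reject & ~is_null) / max(np.sum(~is_null), 1))
print("specificity:", np.sum(~reject & is_null) / max(np.sum(is_null), 1))
```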
895.
Suppose θ̂ is an estimator of θ in ℝ that satisfies the central limit theorem. In general, inferences on θ are based on the central limit approximation. These have error O(n^{-1/2}), where n is the sample size. Many unsuccessful attempts have been made at finding transformations which reduce this error to O(n^{-1}). The variance stabilizing transformation fails to achieve this. We give alternative transformations that have bias O(n^{-2}) and skewness O(n^{-3}). Examples include the binomial, Poisson, chi-square and hypergeometric distributions.
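A small sketch contrasting the plain normal approximation with the classical variance-stabilizing square-root transform for a Poisson mean; the paper’s improved transformations are not reproduced here, and the sample size and parameter values are illustrative assumptions.

```python
# Minimal sketch: coverage of normal-approximation intervals for a Poisson mean,
# comparing the plain estimator with the classical variance-stabilizing
# square-root transform. (The paper's improved transformations are not reproduced.)
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps, z = 2.0, 10, 20000, 1.96
cover_plain = cover_vst = 0
for _ in range(reps):
    x = rng.poisson(theta, size=n)
    m = x.mean()
    # Plain CI: m +/- z * sqrt(m / n), from the central limit approximation.
    cover_plain += abs(m - theta) <= z * np.sqrt(m / n)
    # Variance-stabilizing: sqrt(mean) is approximately N(sqrt(theta), 1/(4n)).
    lo, hi = np.sqrt(m) - z / (2 * np.sqrt(n)), np.sqrt(m) + z / (2 * np.sqrt(n))
    cover_vst += lo**2 <= theta <= hi**2
print("plain coverage:", cover_plain / reps)
print("variance-stabilized coverage:", cover_vst / reps)
```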
896.
Two-sample comparison problems are often encountered in practical projects and have been widely studied in the literature. Owing to practical demands, research on this topic under special settings such as the semiparametric framework has also attracted great attention. Zhou and Liang (Biometrika 92:271–282, 2005) proposed an empirical likelihood-based semiparametric inference for the comparison of treatment effects in a two-sample problem with censored data. However, their approach is actually a pseudo-empirical likelihood and the method may not be fully efficient. In this study, we develop a new empirical likelihood-based inference under a more general framework by using the hazard formulation of censored data for two-sample semiparametric hybrid models. We demonstrate that our empirical likelihood statistic converges to a standard chi-squared distribution under the null hypothesis. We further illustrate the use of the proposed test by testing the ROC curve with censored data, among others. The numerical performance of the proposed method is also examined.
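A simplified sketch of the empirical likelihood principle: the one-sample EL ratio statistic for a mean and its chi-squared(1) calibration. This illustrates only the general mechanism of EL statistics converging to a chi-squared limit, not the censored two-sample hazard-based method developed in the paper; the data are illustrative.

```python
# Minimal sketch of the empirical likelihood principle: the one-sample EL ratio
# statistic for a mean is asymptotically chi-squared(1). Simplified illustration
# only; not the censored two-sample hazard-based method of the paper.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:              # mu outside the convex hull of the data
        return np.inf
    eps = 1e-10
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    # Lagrange multiplier solves sum(d_i / (1 + lambda * d_i)) = 0.
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log(1.0 + lam * d))

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=80)           # true mean 2.0
stat = el_log_ratio(x, mu=2.0)
print("EL statistic:", stat, " p-value:", stats.chi2.sf(stat, df=1))
```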
897.
We construct and investigate robust nonparametric tests for the two-sample location problem. A test based on a suitable scaling of the median of the set of differences between the two samples, which is the Hodges-Lehmann shift estimator corresponding to the Wilcoxon two-sample rank test, leads to higher robustness against outliers than the Wilcoxon test itself, while preserving its efficiency under a broad range of distributions. The good performance of the constructed test is investigated under different distributions and outlier configurations and compared to alternatives such as the two-sample t-test, the Wilcoxon test and the median test, as well as to tests based on the difference of the sample medians or on the one-sample Hodges-Lehmann estimators.
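A minimal sketch of the two-sample Hodges-Lehmann shift estimate (the median of all pairwise differences), calibrated here with a permutation test rather than the asymptotic scaling studied in the paper; the data and outlier pattern are illustrative assumptions.

```python
# Minimal sketch: two-sample Hodges-Lehmann shift estimate (median of all pairwise
# differences), calibrated by a permutation test rather than the asymptotic
# scaling studied in the paper.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.7, 1.0, size=30)
y[:2] += 10                                       # a couple of gross outliers

def hl_shift(a, b):
    return np.median(np.subtract.outer(b, a))     # median over all b_j - a_i

observed = hl_shift(x, y)
pooled = np.concatenate([x, y])
perms = np.array([
    hl_shift(*np.split(rng.permutation(pooled), [len(x)]))
    for _ in range(2000)
])
pval = np.mean(np.abs(perms) >= abs(observed))
print("HL shift estimate:", observed, " permutation p-value:", pval)
```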
898.
The prevalence of interval-censored data is increasing in medical studies due to the growing use of biomarkers to define a disease progression endpoint. Interval censoring results from periodic monitoring of the progression status: for example, disease progression is established in the interval between the clinic visit where progression is recorded and the prior clinic visit where there was no evidence of disease progression. A methodology is proposed for estimation and inference on the regression coefficients in the Cox proportional hazards model with interval-censored data. The methodology is based on estimating equations and uses an inverse probability weight to select event time pairs where the ordering is unambiguous. Simulations are performed to examine the finite sample properties of the estimate, and a colon cancer data set is used to demonstrate its performance relative to the conventional partial likelihood estimate that ignores the interval censoring.
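A minimal sketch of the core idea that only subjects with non-overlapping censoring intervals have an unambiguous event-time ordering; it simulates visit-based interval censoring and counts the usable pairs, and does not implement the paper’s weighted estimating equations. Visit schedule and event distribution are illustrative assumptions.

```python
# Minimal sketch of the core idea: with interval-censored progression times, only
# pairs of subjects whose observation intervals do not overlap have an unambiguous
# ordering. Simulates visit-based censoring and extracts such pairs; it does not
# implement the paper's weighted estimating equations.
import numpy as np

rng = np.random.default_rng(7)
n = 100
true_time = rng.exponential(scale=12.0, size=n)            # latent progression times
visits = np.arange(0, 37, 6)                               # clinic visits every 6 months

# Progression is known only to lie between the last progression-free visit and the
# first visit at which progression is recorded (no visit found => right-censored).
left = np.array([visits[visits < t].max(initial=0.0) for t in true_time])
right = np.array([visits[visits >= t].min(initial=np.inf) for t in true_time])

# A pair (i, j) is unambiguously ordered if subject i's interval ends before j's begins.
ordered_pairs = [(i, j) for i in range(n) for j in range(n)
                 if i != j and right[i] <= left[j]]
print("usable (unambiguously ordered) pairs:", len(ordered_pairs),
      "out of", n * (n - 1))
```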
899.
In many randomized clinical trials, the primary response variable, for example the survival time, is not observed directly after the patients enroll in the study but rather after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring, which occurs when the study ends before the patient’s response is observed or when the patient drops out of the study. Censoring is often assumed to occur at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators of the treatment effect parameter which are applicable when the response variable is right censored. Baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than estimators which do not use the auxiliary covariates.
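A minimal sketch of the basic inverse-probability-of-censoring weighting (IPCW) idea, using a Kaplan-Meier estimate of the censoring distribution to estimate an arm-specific mean response under right censoring. The data and setup are illustrative assumptions, and the paper’s augmented, covariate-adjusted estimators are not reproduced.

```python
# Minimal sketch of inverse-probability-of-censoring weighting (IPCW): estimate the
# mean of a right-censored response in each arm by weighting uncensored subjects by
# 1 / G_hat(T), where G_hat is a Kaplan-Meier estimate of the censoring survival
# function. Illustrates only the basic weighting idea.
import numpy as np

rng = np.random.default_rng(8)

def km_survival(times, events):
    """Kaplan-Meier survival estimate evaluated at each subject's own time."""
    order = np.argsort(times)
    e = events[order]
    at_risk = len(times) - np.arange(len(times))
    factors = np.where(e, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.empty(len(times))
    surv[order] = np.cumprod(factors)
    return surv

def ipcw_mean(y, censor_time):
    obs = np.minimum(y, censor_time)
    delta = y <= censor_time                       # 1 if the response is observed
    # Censoring survival estimated by KM with the event/censoring roles swapped.
    G = km_survival(obs, ~delta)
    return np.mean(delta * obs / np.clip(G, 1e-6, None))

y_treat = rng.exponential(scale=3.0, size=300)     # true mean 3.0
y_ctrl = rng.exponential(scale=2.0, size=300)      # true mean 2.0
print("treatment mean:", ipcw_mean(y_treat, rng.uniform(0, 25, size=300)))
print("control mean:  ", ipcw_mean(y_ctrl, rng.uniform(0, 25, size=300)))
```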
900.
In sample surveys and many other areas of application, the ratio of variables is often of great importance. This often occurs when one variable is available at the population level while another variable of interest is available for sample data only. In this case, using the sample ratio, we can often gather valuable information on the variable of interest for the unsampled observations. In many other studies the ratio itself is of interest, for example when estimating proportions from a random number of observations. In this note we compare three confidence intervals for the population ratio: a large sample interval, a log-based version of the large sample interval, and Fieller’s interval. This is done through data analysis and through a small simulation experiment. The Fieller method has often been proposed as a superior interval for small sample sizes. We show through a data example and simulation experiments that Fieller’s method often gives nonsensical and uninformative intervals when the observations are noisy relative to the mean of the data. The large sample interval does not suffer similarly and thus can be a more reliable method for small and large samples.
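A small sketch of the three interval constructions for a ratio of means from two independent samples, using delta-method variances; the note’s exact setting and variance formulas may differ, so this is an illustration of the constructions rather than a reproduction of the paper’s intervals.

```python
# Minimal sketch of three intervals for a ratio of means theta = mean(y)/mean(x),
# here for two independent samples with delta-method variances: a large-sample
# interval, a log-based version of it, and Fieller's interval.
import numpy as np

z = 1.96

def ratio_intervals(x, y):
    nx, ny = len(x), len(y)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1) / nx, y.var(ddof=1) / ny
    theta = my / mx

    se = np.sqrt(vy + theta**2 * vx) / abs(mx)              # large-sample (delta method)
    large = (theta - z * se, theta + z * se)

    se_log = np.sqrt(vy / my**2 + vx / mx**2)               # delta method on log(theta)
    log_based = (theta * np.exp(-z * se_log), theta * np.exp(z * se_log))

    # Fieller: theta with (my - theta*mx)^2 <= z^2 (vy + theta^2 vx),
    # i.e. a*theta^2 - 2*b*theta + c <= 0.
    a, b, c = mx**2 - z**2 * vx, mx * my, my**2 - z**2 * vy
    disc = b**2 - a * c
    if a > 0 and disc >= 0:
        fieller = ((b - np.sqrt(disc)) / a, (b + np.sqrt(disc)) / a)
    else:
        fieller = None                                       # unbounded or empty: uninformative
    return large, log_based, fieller

rng = np.random.default_rng(9)
x = rng.normal(5.0, 1.0, size=25)
y = rng.normal(10.0, 2.0, size=25)
print(ratio_intervals(x, y))
# When x is noisy relative to its mean (e.g. rng.normal(1.0, 3.0, ...)), the Fieller
# interval is often unbounded (None here), the behaviour discussed above.
```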