61.
Has the national minimum wage reduced UK wage inequality?   (Total citations: 1; self-citations: 0; citations by others: 1)
Summary. The paper investigates the effect on the wage distribution of the introduction of the national minimum wage (NMW) in the UK in April 1999. Because of the structure of UK earnings statistics, this is not straightforward to investigate, and various methods for adjusting the published statistics are discussed. The main conclusions are that the NMW has a detectable effect on the wage distribution and that compliance with it is widespread. The effect is nevertheless limited: the NMW has been set at a level at which only 6–7% of workers are directly affected, and it has had virtually no effect on the pay of workers who are not directly affected. Furthermore, virtually all of the changes occurred within two months of the introduction in April 1999, and the impact declined from April 1999 to September 2001 as the minimum wage was not uprated in line with the growth of average earnings. The more substantial increase in the NMW in October 2001 partially, but not wholly, restored this impact.
62.
Statistical agencies have conflicting obligations: to protect confidential information provided by respondents to surveys or censuses, and to make data available for research and planning. When the microdata themselves are to be released, agencies reconcile these objectives by applying statistical disclosure limitation (SDL) methods to the data, such as noise addition, swapping or microaggregation. Some of these methods do not preserve important structure and constraints in the data, such as positivity of some attributes or inequality constraints between attributes. Failure to preserve constraints is not only problematic for data utility but may also increase disclosure risk. In this paper, we describe an SDL method that preserves both the positivity of attributes and the mean vector and covariance matrix of the original data. The basis of the method is to apply multiplicative noise with a proper, data-dependent covariance structure.
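The multiplicative-noise idea can be sketched as follows. This toy version uses i.i.d. lognormal factors with mean exactly one, so positivity and the attribute means are preserved; the paper's data-dependent covariance construction is not reproduced, and all function and variable names are illustrative:

```python
import numpy as np

def mask_multiplicative(data, sigma=0.1, seed=0):
    """Mask positive microdata with i.i.d. lognormal multiplicative noise.

    The noise factors have mean exactly 1 (log-mean -sigma^2/2), so
    positivity and the expectation of each attribute are preserved.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    noise = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(n, p))
    return data * noise

# demo on synthetic positive data
rng = np.random.default_rng(1)
original = rng.gamma(shape=2.0, scale=3.0, size=(5000, 2))
masked = mask_multiplicative(original, sigma=0.2)
print((masked > 0).all())                                   # positivity kept
print(np.allclose(masked.mean(0), original.mean(0), rtol=0.05))
```

Because the masked values never change sign, constraints such as nonnegative incomes survive the perturbation, which additive noise would not guarantee.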
63.
Least-squares and quantile regressions are method-of-moments techniques that are typically used in isolation. A leading example where efficiency may be gained by combining them is one where some information on the error quantiles is available but the error distribution cannot be fully specified. This estimation problem may be cast as solving an over-determined estimating equation (EE) system, for which the generalized method of moments (GMM) and empirical likelihood (EL) are approaches of recognized importance. The major difficulty in implementing these techniques here is that the EEs associated with the quantiles are non-differentiable. In this paper, we develop a kernel-based smoothing technique for non-smooth EEs and derive the asymptotic properties of the GMM and maximum smoothed EL (MSEL) estimators based on the smoothed EEs. Via a simulation study, we investigate the finite-sample properties of GMM and MSEL estimators that combine least-squares and quantile moment relationships. Applications to real datasets are also considered.
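A minimal illustration of the smoothing idea, in a toy location model rather than a full regression: the non-smooth median indicator is replaced by a normal-CDF kernel, and the smoothed quantile moment is combined with the least-squares moment in a GMM criterion with an identity weighting matrix. The bandwidth and all names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def gmm_location(x, h=0.5):
    """Toy GMM combining a least-squares and a smoothed median moment.

    Moment 1 (least squares):        E[x - mu] = 0.
    Moment 2 (quantile, tau = 0.5):  the indicator 1{x <= mu} is replaced
    by the kernel-smoothed version Phi((mu - x) / h), which makes the
    estimating equation differentiable in mu.
    """
    def objective(mu):
        g1 = np.mean(x - mu)
        g2 = np.mean(norm.cdf((mu - x) / h)) - 0.5
        return g1**2 + g2**2          # identity weighting matrix

    res = minimize_scalar(objective, bounds=(x.min(), x.max()),
                          method="bounded")
    return res.x

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=2000)   # symmetric, heavy-tailed errors
print(round(gmm_location(x), 2))      # location estimate near 0
```

For the symmetric t(3) sample both moments identify the same location, so the combined estimator lands near zero; with heavy tails the median moment is the stabilizing one.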
64.
Current methods for testing the equality of conditional correlations of bivariate data on a third variable of interest (a covariate) are limited because they discretize the covariate when it is continuous. In this study, we propose a linear-model approach to estimation and hypothesis testing for the Pearson correlation coefficient, in which the correlation itself is modeled as a function of continuous covariates. Restricted maximum likelihood is used for parameter estimation, and a corrected likelihood ratio test is performed for hypothesis testing. This approach allows flexible and robust inference and prediction of conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a range of covariates in national survey data.
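A simplified sketch of modelling the correlation as a function of a covariate: here atanh of the correlation is taken to be linear in z and is estimated by ordinary Gaussian maximum likelihood with standardized margins, in place of the paper's REML estimation and corrected likelihood ratio test. All names and the simulation setup are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def fit_correlation_model(x, y, z):
    """Fit atanh(rho_i) = g0 + g1 * z_i by Gaussian maximum likelihood,
    assuming unit-variance margins for x and y."""
    def nll(gamma):
        rho = np.tanh(gamma[0] + gamma[1] * z)    # keeps |rho| < 1
        q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
        return np.sum(0.5 * q + 0.5 * np.log(1 - rho**2))

    return minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead").x

# simulate data whose correlation increases with the covariate z
rng = np.random.default_rng(0)
n = 4000
z = rng.uniform(-1, 1, n)
rho = np.tanh(0.3 + 0.8 * z)
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
g0, g1 = fit_correlation_model(x, y, z)
print(round(g0, 1), round(g1, 1))     # should be near (0.3, 0.8)
```

The tanh link plays the role of the linear model for the correlation: no discretization of z is needed, and the fitted curve tanh(g0 + g1 z) predicts the conditional correlation at any covariate value.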
65.
The median is a commonly used parameter to characterize biomarker data. In particular, when the two underlying distributions differ substantially, comparing medians provides different information than comparing means; however, very few tests for medians are available. We propose a series of two-sample median-specific tests based on empirical likelihood methodology and investigate their properties. We present the technical details of incorporating the relevant constraints into the empirical likelihood function for in-depth median testing. An extensive Monte Carlo study shows that the proposed tests have excellent operating characteristics even in unfavourable settings such as non-exchangeability under the null hypothesis. We apply the proposed methods to biomarker data from Western blot analysis, comparing normal cells with bronchial epithelial cells from a case-control study. The Canadian Journal of Statistics 39: 671–689; 2011. © 2011 Statistical Society of Canada
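The flavour of median-specific empirical likelihood can be seen in the one-sample case, where the constrained maximization has a closed form; this is a sketch only, not the paper's two-sample procedure:

```python
import numpy as np
from scipy.stats import chi2

def el_median_test(x, m0):
    """One-sample empirical likelihood test of H0: median(x) = m0.

    Maximizing prod(n * w_i) subject to sum(w_i * 1{x_i <= m0}) = 1/2
    puts weight 1/(2k) on each of the k points at or below m0 and
    1/(2(n-k)) on the rest, giving a closed-form log EL ratio that is
    asymptotically chi-square with 1 degree of freedom.
    """
    x = np.asarray(x)
    n = len(x)
    k = int(np.sum(x <= m0))
    if k == 0 or k == n:              # constraint infeasible
        return np.inf, 0.0
    stat = -2 * (k * np.log(n / (2 * k))
                 + (n - k) * np.log(n / (2 * (n - k))))
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)        # true median = log 2
stat_true, p_true = el_median_test(x, np.log(2))
stat_far, p_far = el_median_test(x, 2.0)
print(p_far < 1e-6, p_true > p_far)             # far-off null is rejected
```

The statistic depends on the data only through the count k of observations below the hypothesized median, which is what makes distribution-free median testing possible here.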
66.
The Gibbs sampler was proposed as a general method for Bayesian computation by Gelfand and Smith (1990). However, most experience to date has been in applications assuming conjugacy, where implementation is reasonably straightforward. This paper describes a tailored approximate rejection method for implementing the Gibbs sampler when nonconjugate structure is present. Several challenging applications are presented for illustration.
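The rejection step at the heart of such a scheme can be sketched in isolation. Here a hypothetical nonconjugate "full conditional" proportional to exp(-x^2/2)/(1+x^2) is sampled by proposing from its normal envelope; the paper's tailored envelope construction is not reproduced:

```python
import numpy as np

def rejection_sample(rng):
    """Draw from the nonstandard density p(x) ∝ exp(-x^2/2) / (1 + x^2).

    Since 1 / (1 + x^2) <= 1, the standard normal density is a valid
    envelope: propose x ~ N(0, 1) and accept with probability
    1 / (1 + x^2).  Within a Gibbs sweep this would replace the exact
    conditional draw that conjugacy normally provides.
    """
    while True:
        x = rng.standard_normal()
        if rng.uniform() < 1.0 / (1.0 + x**2):
            return x

rng = np.random.default_rng(0)
draws = np.array([rejection_sample(rng) for _ in range(20000)])
# the target is symmetric and lighter-tailed than N(0, 1)
print(abs(draws.mean()) < 0.05, draws.std() < 1.0)
```

The acceptance probability is cheap to evaluate, which is the practical point: each Gibbs update stays exact even though the conditional has no recognizable form.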
67.
Mass spectrometry-based proteomics has become the tool of choice for identifying and quantifying the proteome of an organism. Though recent years have seen tremendous improvements in instrument performance and computational tools, significant challenges remain, and there are many opportunities for statisticians to make important contributions. In the most widely used "bottom-up" approach to proteomics, complex mixtures of proteins are first subjected to enzymatic cleavage; the resulting peptide products are separated based on chemical or physical properties and analyzed using a mass spectrometer. The two fundamental challenges in the analysis of bottom-up MS-based proteomics are (1) identifying the proteins present in a sample and (2) quantifying the abundance levels of the identified proteins. Both challenges require knowledge of the biological and technological context that gives rise to the observed data, as well as the application of sound statistical principles for estimation and inference. We present an overview of bottom-up proteomics and outline the key statistical issues that arise in protein identification and quantification.
68.
When preparing data for public release, information organizations face the challenge of preserving data quality while protecting the confidentiality of both data subjects and sensitive data attributes. Without knowing what types of analyses data users will conduct, it is often hard to alter data without sacrificing utility. In this paper, we propose a new approach to mitigate this difficulty: using Bayesian additive regression trees (BART), in connection with existing methods for statistical disclosure limitation, to help preserve data utility while meeting confidentiality requirements. We illustrate the performance of our method through both a simulation and a data example. The method works well when the targeted relationship underlying the original data is not weak, and its performance appears robust to the intensity of alteration.
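A rough sketch of the tree-based synthesis idea, with a single CART tree (scikit-learn) standing in for BART and Gaussian residual noise added to the fitted values. All names and the check at the end are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def synthesize_attribute(X, y, rng, max_depth=8):
    """Replace a sensitive attribute y with synthetic values drawn from
    a tree model of y given the other attributes X.

    One CART tree stands in for BART here; synthetic values are the
    tree's predictions plus Gaussian noise scaled to the residuals.
    """
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
    tree.fit(X, y)
    pred = tree.predict(X)
    resid_sd = np.std(y - pred)
    return pred + rng.normal(0.0, resid_sd, size=len(y))

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(size=n)   # target relationship
y_syn = synthesize_attribute(X, y, rng)

# the X-y relationship should survive synthesis
orig_corr = np.corrcoef(X[:, 0], y)[0, 1]
syn_corr = np.corrcoef(X[:, 0], y_syn)[0, 1]
print(orig_corr > 0.7, syn_corr > 0.6)
```

Because the released values are model draws rather than the respondents' own values, the disclosure risk for the synthesized attribute is reduced while regressions involving it remain approximately valid.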
69.
70.
An auxiliary variable method based on a slice sampler is shown to provide an attractive simulation-based strategy for fitting Bayesian models under proper priors. Though broadly applicable, we illustrate it in the context of fitting spatial models for geo-referenced or point-source data. Spatial modeling within a Bayesian framework offers inferential advantages, and the slice sampler provides an algorithm that is essentially off the shelf. Further potential advantages over importance sampling and Metropolis approaches are noted, and illustrative examples are supplied.
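The auxiliary-variable construction behind the slice sampler can be sketched for a target where the slice is available in closed form (the standard normal); realistic spatial targets would require a stepping-out or shrinkage procedure instead:

```python
import numpy as np

def slice_sampler_normal(n_draws, rng, x0=0.0):
    """Auxiliary-variable slice sampler for the standard normal.

    Each iteration draws the auxiliary 'height' u ~ U(0, f(x)) with
    f(x) = exp(-x^2/2), then draws x uniformly on the slice
    {x : f(x) >= u}, which here is the interval |x| <= sqrt(-2 log u).
    Alternating the two uniform draws leaves the target invariant.
    """
    x = x0
    draws = np.empty(n_draws)
    for i in range(n_draws):
        u = rng.uniform(0.0, np.exp(-x**2 / 2))
        half_width = np.sqrt(-2.0 * np.log(u))
        x = rng.uniform(-half_width, half_width)
        draws[i] = x
    return draws

rng = np.random.default_rng(0)
draws = slice_sampler_normal(50000, rng)
print(abs(draws.mean()) < 0.03, abs(draws.std() - 1.0) < 0.03)
```

No tuning parameters appear: unlike a Metropolis step, there is no proposal scale to calibrate, which is what makes the algorithm "essentially off the shelf."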

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)