21.
A test of congruence among distance matrices is described. It tests the hypothesis that several matrices, containing different types of variables about the same objects, are congruent with one another, so they can be used jointly in statistical analysis. Raw data tables are turned into similarity or distance matrices prior to testing; they can then be compared to data that naturally come in the form of distance matrices. The proposed test can be seen as a generalization of the Mantel test of matrix correspondence to any number of distance matrices. This paper shows that the new test has the correct rate of Type I error and good power. Power increases as the number of objects and the number of congruent data matrices increase; power is higher when the total number of matrices in the study is smaller. To illustrate the method, the proposed test is used to test the hypothesis that matrices representing different types of organoleptic variables (colour, nose, body, palate and finish) in single-malt Scotch whiskies are congruent.
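For readers who want to experiment with the idea, the following Python sketch implements a Mantel-style permutation test of congruence among several distance matrices. The statistic (the sum of pairwise correlations between the vectorized upper triangles) and the permutation scheme are simplifications of the published procedure, and all function names are illustrative rather than taken from the paper.

```python
import numpy as np

def upper(d):
    """Vectorize the upper triangle of a distance matrix."""
    i, j = np.triu_indices(d.shape[0], k=1)
    return d[i, j]

def congruence_stat(mats):
    """Sum of pairwise Pearson correlations between the vectorized matrices."""
    vecs = [upper(m) for m in mats]
    k = len(vecs)
    return sum(np.corrcoef(vecs[a], vecs[b])[0, 1]
               for a in range(k) for b in range(a + 1, k))

def congruence_test(mats, n_perm=999, seed=0):
    """Permutation test: objects are permuted within every matrix but the first."""
    rng = np.random.default_rng(seed)
    obs = congruence_stat(mats)
    n = mats[0].shape[0]
    count = 1  # include the observed value in the reference distribution
    for _ in range(n_perm):
        permuted = [mats[0]]
        for m in mats[1:]:
            p = rng.permutation(n)
            permuted.append(m[np.ix_(p, p)])
        if congruence_stat(permuted) >= obs:
            count += 1
    return obs, count / (n_perm + 1)
```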
22.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of hazard, density, and survivor functions for randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes, which are used to obtain an estimate of the hazard function, which, in turn, is used to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, then semi-parametric estimates may be obtained using a maximum likelihood estimate of one crude lifetime and the empirical distribution function of the other. Simulation studies show that the survivor function estimates from crude lifetimes compare favourably with those given by the product-limit estimator when crude lifetimes are chosen correctly. Other advantages are discussed.
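As a hedged illustration of the parametric fitting step only, the sketch below obtains maximum likelihood estimates for a Weibull model of one right-censored lifetime; the competing-risks bookkeeping and the semi-parametric combination of crude lifetimes described in the abstract are omitted, and the function is illustrative, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_censored_mle(times, events):
    """Right-censored Weibull MLE: events == 1 observed, 0 censored.

    The log-likelihood adds log f(t) for observed times and log S(t)
    for censored times.
    """
    times = np.asarray(times, float)
    events = np.asarray(events, float)

    def neg_loglik(params):
        log_k, log_lam = params
        k, lam = np.exp(log_k), np.exp(log_lam)
        z = times / lam
        log_f = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
        log_s = -z ** k
        return -(events * log_f + (1 - events) * log_s).sum()

    res = minimize(neg_loglik, x0=[0.0, np.log(times.mean())])
    k, lam = np.exp(res.x)
    return k, lam  # shape, scale; fitted survivor: S(t) = exp(-(t/lam)**k)

# tiny made-up example
shape, scale = weibull_censored_mle(times=[2.1, 3.5, 4.0, 5.2, 6.8],
                                    events=[1, 1, 0, 1, 0])
```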
23.
Demonstrated equivalence between a categorical regression model based on case-control data and an I-sample semiparametric selection bias model leads to a new goodness-of-fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov-type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I-sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.
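A toy version of a weighted Kolmogorov–Smirnov-type comparison, with a naive bootstrap for the p-value, is sketched below. It compares raw empirical distribution functions rather than the model-based estimates used in the paper, so it only illustrates the general form of the statistic; the weighting by sample share and the pooled-resampling bootstrap are assumptions for illustration.

```python
import numpy as np

def ks_distance(x, y):
    """Two-sample sup|F_x - F_y| evaluated on the pooled sample points."""
    grid = np.sort(np.concatenate([x, y]))
    fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(fx - fy))

def weighted_ks_stat(pairs):
    """Weighted average of per-category KS distances; weights are sample shares."""
    sizes = np.array([len(x) + len(y) for x, y in pairs], float)
    w = sizes / sizes.sum()
    return sum(wi * ks_distance(x, y) for wi, (x, y) in zip(w, pairs))

def bootstrap_pvalue(pairs, n_boot=999, seed=0):
    """Naive bootstrap under the null: resample each pair from its pooled sample."""
    rng = np.random.default_rng(seed)
    obs = weighted_ks_stat(pairs)
    hits = 1
    for _ in range(n_boot):
        boot = []
        for x, y in pairs:
            pooled = np.concatenate([x, y])
            boot.append((rng.choice(pooled, size=len(x), replace=True),
                         rng.choice(pooled, size=len(y), replace=True)))
        if weighted_ks_stat(boot) >= obs:
            hits += 1
    return obs, hits / (n_boot + 1)
```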
24.
A Game-Theoretic Analysis of Statistical Law Enforcement   (Total citations: 1; self-citations: 0; citations by others: 1)
Addressing the currently serious distortion of statistical data in China, a phenomenon that has drawn widespread public concern, this paper uses game theory as an analytical tool and introduces a repeated game to study the conflict of interest between the data-reporting party and the inspecting party in statistical law enforcement. From the perspective of statistical law enforcement, it identifies the main causes of statistical data distortion and proposes five corresponding countermeasures.
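The reporter-versus-inspector conflict described here resembles the textbook inspection game. The sketch below computes the mixed-strategy equilibrium of a one-shot version under hypothetical payoffs; the numbers and the payoff structure are assumptions for illustration, not taken from the paper, which analyses a repeated game.

```python
def inspection_game_equilibrium(gain, fine, cost, reward, loss):
    """Mixed-strategy Nash equilibrium of a one-shot inspection game.

    Reporter: falsifying yields +gain if not inspected and -fine if caught;
    honest reporting yields 0.
    Inspector: inspecting costs `cost`; catching a falsifier yields `reward`;
    an undetected falsification costs `loss`.
    """
    p_inspect = gain / (gain + fine)    # makes the reporter indifferent
    q_falsify = cost / (reward + loss)  # makes the inspector indifferent
    return p_inspect, q_falsify

# hypothetical payoffs, for illustration only
p, q = inspection_game_equilibrium(gain=3.0, fine=10.0, cost=1.0,
                                   reward=2.0, loss=4.0)
print(f"inspect with probability {p:.2f}, falsify with probability {q:.2f}")
```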
25.
Research on RSA-Based Information Encryption Technology for E-Commerce   (Total citations: 1; self-citations: 0; citations by others: 1)
The 21st century is the era of networked information. The rapid growth and spread of e-commerce have transformed traditional notions of business and consumption, and online shopping has become a new form of consumption; with it, however, come the security problems on which the survival and development of e-commerce depend. By analysing the security risks of e-commerce, this article demonstrates the role of data encryption in e-commerce security, focuses on the RSA public-key encryption algorithm, and, through examples, analyses and explains in detail its encryption principle, computational complexity and related security issues.
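As a reminder of the encryption principle discussed, here is a textbook toy RSA example with tiny primes; production systems use much larger moduli and padding schemes (such as OAEP), which this sketch deliberately omits.

```python
# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # Euler's totient: 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (2753)

m = 65                    # plaintext message, must satisfy 0 <= m < n
c = pow(m, e, n)          # encryption: c = m^e mod n
assert pow(c, d, n) == m  # decryption: m = c^d mod n recovers the message
```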
26.
Summary.  As a part of the EUREDIT project new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
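A rough Python sketch of the rank-correlation idea is given below: a scatter matrix is built from the Spearman correlations and median absolute deviations of the columns, and observations with large Mahalanobis distances are flagged. Sampling weights and missing values, which the actual EUREDIT methods handle, are not treated here, and the chi-square cut-off is a common convention rather than the paper's rule.

```python
import numpy as np
from scipy.stats import chi2, rankdata

def rank_correlation_outliers(X, alpha=0.01):
    """Flag multivariate outliers using a rank-correlation-based scatter matrix.

    Simplified stand-in: centre = column-wise medians, scale = normalised MADs,
    correlation = Spearman correlation of the columns. Rows whose squared
    Mahalanobis distance exceeds the chi-square(1 - alpha) quantile are flagged.
    """
    X = np.asarray(X, float)
    n, p = X.shape
    centre = np.median(X, axis=0)
    mad = 1.4826 * np.median(np.abs(X - centre), axis=0)  # consistent at the normal
    ranks = np.column_stack([rankdata(X[:, j]) for j in range(p)])
    R = np.corrcoef(ranks, rowvar=False)                  # Spearman correlation matrix
    S = np.diag(mad) @ R @ np.diag(mad)                   # robust scatter estimate
    diff = X - centre
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff)
    return d2 > chi2.ppf(1 - alpha, df=p)
```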
27.
By approximating the nonparametric component using a regression spline in generalized partial linear models (GPLM), robust generalized estimating equations (GEE), involving a bounded score function and a leverage-based weighting function, can be used to estimate the regression parameters in GPLM robustly for longitudinal or clustered data. In this paper, score test statistics are proposed for testing the regression parameters with robustness, and their asymptotic distributions under the null hypothesis and a class of local alternative hypotheses are studied. The proposed score tests rely on the estimation of a smaller model without the testing parameters involved, and perform well in the simulation studies and real data analysis conducted in this paper.
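The sketch below is a heavily simplified stand-in for this kind of approach: the nonparametric component is represented by a truncated-power spline basis and the parameters are estimated by minimising a bounded (Huber) loss. The GEE working correlation, the leverage-based weights and the score test itself are all omitted, and every function name is illustrative rather than from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def spline_basis(t, knots, degree=3):
    """Truncated power basis: 1, t, ..., t^degree, (t - knot)_+^degree."""
    cols = [t ** d for d in range(degree + 1)]
    cols += [np.clip(t - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

def huber(r, c=1.345):
    """Huber loss with the usual 95%-efficiency tuning constant."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def robust_partial_linear_fit(y, X, t, knots):
    """Minimise the Huber loss of y - X @ beta - B(t) @ gamma over (beta, gamma)."""
    B = spline_basis(t, knots)
    D = np.column_stack([X, B])
    scale = 1.4826 * np.median(np.abs(y - np.median(y)))  # crude residual scale

    def objective(theta):
        return huber((y - D @ theta) / scale).sum()

    theta0 = np.linalg.lstsq(D, y, rcond=None)[0]  # least-squares start
    res = minimize(objective, theta0, method="BFGS")
    beta = res.x[:X.shape[1]]   # parametric part
    gamma = res.x[X.shape[1]:]  # spline coefficients for the nonparametric part
    return beta, gamma
```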
28.
Abstract.  In this paper, we propose a random varying-coefficient model for longitudinal data. This model is different from the standard varying-coefficient model in the sense that the time-varying coefficients are assumed to be subject-specific, and can be considered as realizations of stochastic processes. This modelling strategy allows us to employ powerful mixed-effects modelling techniques to efficiently incorporate the within-subject and between-subject variations in the estimators of time-varying coefficients. Thus, the subject-specific feature of longitudinal data is effectively considered in the proposed model. A backfitting algorithm is proposed to estimate the coefficient functions. Simulation studies show that the proposed estimation methods are more efficient in finite-sample performance compared with the standard local least squares method. An application to an AIDS clinical study is presented to illustrate the proposed methodologies.
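A minimal backfitting sketch for a single covariate is shown below: the time-varying intercept and slope are updated from kernel-weighted partial residuals. The subject-specific random effects that are central to the proposed model are ignored, so this only conveys the flavour of a backfitting algorithm for varying coefficients, not the paper's estimator.

```python
import numpy as np

def gauss_kernel(u, h):
    """Gaussian kernel weights with bandwidth h."""
    return np.exp(-0.5 * (u / h) ** 2)

def backfit_varying_coefficients(t, x, y, h=0.5, n_iter=20):
    """Backfitting sketch for y = b0(t) + b1(t) * x + noise,
    with b0 and b1 estimated at the observed time points."""
    t, x, y = (np.asarray(a, float) for a in (t, x, y))
    n = len(y)
    W = gauss_kernel(t[:, None] - t[None, :], h)  # n x n kernel weight matrix
    b0 = np.full(n, y.mean())
    b1 = np.zeros(n)
    for _ in range(n_iter):
        r1 = y - b1 * x                   # partial residual for the intercept
        b0 = W @ r1 / W.sum(axis=1)
        r0 = y - b0                       # partial residual for the slope
        b1 = W @ (x * r0) / (W @ (x ** 2))
    return b0, b1
```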
29.
Summary.  Social data often contain missing information. The problem is inevitably severe when analysing historical data. Conventionally, researchers analyse complete records only. Listwise deletion not only reduces the effective sample size but also may result in biased estimation, depending on the missingness mechanism. We analyse household types using population registers from ancient China (618–907 AD), comparing a simple classification, a latent class model of the complete data and a latent class model of the complete and partially missing data assuming four types of ignorable and non-ignorable missingness mechanisms. The findings show that either a frequency classification or a latent class analysis using the complete records only yielded biased estimates and incorrect conclusions in the presence of partially missing data of a non-ignorable mechanism. Although simply assuming ignorable or non-ignorable missing data produced consistently similar, higher estimates of the proportion of complex households, a specification of the relationship between the latent variable and the degree of missingness by a row effect uniform association model helped to capture the missingness mechanism better and improved the model fit.
30.
Missing data, and the bias they can cause, are an almost ever-present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood-based, mixed-effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood-based, mixed-effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
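To make the contrast concrete, the sketch below imputes by LOCF and, separately, fits a likelihood-based mixed-effects model to the observed data with statsmodels. The simulated data, column names and the random-intercept structure are assumptions for illustration only; a random intercept is a much simpler covariance structure than those typically used in MMRM analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format trial data: one row per subject per visit.
rng = np.random.default_rng(1)
n_subj, n_visit = 40, 3
subject = np.repeat(np.arange(n_subj), n_visit)
visit = np.tile(np.arange(1, n_visit + 1), n_subj)
treat = np.repeat(rng.integers(0, 2, n_subj), n_visit)
b_subj = np.repeat(rng.normal(0, 1, n_subj), n_visit)       # random intercepts
y = 5 - 0.3 * visit - 0.5 * treat * visit + b_subj + rng.normal(0, 0.5, n_subj * n_visit)
df = pd.DataFrame({"subject": subject, "visit": visit, "treat": treat, "y": y})

# Introduce missing responses, more likely at later visits.
missing = rng.random(len(df)) < 0.15 * (df["visit"] - 1)
df.loc[missing, "y"] = np.nan

# LOCF: carry each subject's last observed value forward -- the approach
# the text advises against using for the primary analysis.
df["y_locf"] = df.groupby("subject")["y"].ffill()

# Likelihood-based mixed-effects analysis of the observed data only,
# valid under MAR; a random intercept stands in for richer covariance structures.
obs = df.dropna(subset=["y"])
mixed_fit = smf.mixedlm("y ~ visit * treat", obs, groups=obs["subject"]).fit()
print(mixed_fit.params)
```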