61.
Knowledge of urban air quality is the first step in addressing air pollution. For decades, many cities have relied on a network of monitoring stations recording concentration values for the main pollutants. This paper focuses on functional principal component analysis (FPCA) to investigate multiple pollutant datasets measured over time at multiple sites within a given urban area. Our purpose is to extend what has been proposed in the literature to data that are multisite and multivariate at the same time. The approach proves effective in highlighting relevant statistical features of the time series, making it possible to identify significant pollutants and to track the evolution of their variability over time. The paper also addresses the missing-value issue. As is well known, very long gap sequences often occur in air quality datasets, due to prolonged failures that are not easily remedied or to data coming from a mobile monitoring station. In the dataset considered, large and continuous gaps are imputed by an empirical orthogonal function (EOF) procedure, after denoising the raw data by functional data analysis and before performing FPCA, in order to further improve the reconstruction.
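The EOF gap-imputation step mentioned in the abstract can be sketched as an iterative truncated-SVD fill: initialize gaps with column means, reconstruct the matrix from its leading modes, refill only the missing entries, and repeat until the fill stabilizes. This is a minimal illustration, not the authors' implementation; the function name `eof_impute` and its parameters are assumptions.

```python
import numpy as np

def eof_impute(X, n_modes=2, n_iter=200, tol=1e-8):
    """Iteratively impute missing entries of a (time x site) matrix
    using a truncated EOF (SVD) reconstruction."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    # Start from column means, then refine with the leading EOF modes.
    filled = np.where(mask, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        new = np.where(mask, recon, X)  # observed values stay fixed
        if np.max(np.abs(new - filled)) < tol:
            filled = new
            break
        filled = new
    return filled
```

On exactly low-rank data with a long contiguous gap in one column, the iteration recovers the gap essentially exactly; on real pollutant series the reconstruction quality depends on how well a few modes capture the shared variability across sites.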
62.
A pivotal characteristic of credit defaults that most credit scoring models ignore is the rarity of the event. The most widely used model for estimating the probability of default is the logistic regression model. Since the dependent variable represents a rare event, the logistic regression model has relevant drawbacks, for example underestimation of the default probability, which can be very risky for banks. To overcome these drawbacks, we propose the generalized extreme value (GEV) regression model. In particular, in a generalized linear model (GLM) with a binary dependent variable we suggest the quantile function of the GEV distribution as the link function, so that attention is focused on the tail of the response curve for values close to one. Estimation is carried out by maximum likelihood. This model accommodates skewness and generalizes the GLM with complementary log-log link function. We analyse its performance through simulation studies. Finally, we apply the proposed model to empirical data on Italian small and medium-sized enterprises.
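The idea of using the GEV distribution as the inverse link in a binary GLM can be sketched as follows, fitting by maximum likelihood with a generic optimizer. This is a simplified illustration under assumed conventions (fixed shape parameter `xi`, shape held at a chosen value rather than estimated; the names `gev_cdf` and `fit_gev_glm` are hypothetical), not the authors' estimation code.

```python
import numpy as np
from scipy.optimize import minimize

def gev_cdf(eta, xi):
    # GEV cdf at the linear predictor (xi != 0); support requires 1 + xi*eta > 0,
    # so the argument is clipped to stay positive.
    z = np.maximum(1.0 + xi * eta, 1e-10)
    return np.exp(-z ** (-1.0 / xi))

def fit_gev_glm(x, y, xi=-0.25):
    """ML fit of a binary GLM whose inverse link is the GEV cdf (fixed shape xi)."""
    X = np.column_stack([np.ones(len(x)), x])  # add an intercept
    def nll(beta):
        p = np.clip(gev_cdf(X @ beta, xi), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    res = minimize(nll, np.zeros(X.shape[1]), method="Nelder-Mead")
    return res.x
```

Unlike the symmetric logistic curve, this response curve is asymmetric, which is the feature the abstract exploits for rare defaults; as the shape parameter goes to zero the GEV cdf approaches the Gumbel case, recovering a complementary log-log-type link.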
63.
The problem of multivariate regression modelling in the presence of heterogeneous data is addressed, with a focus on how such heterogeneity affects the assessment of the linear relations between responses and explanatory variables. Despite its popularity, clusterwise regression is not designed to identify the linear relationships within 'homogeneous' clusters exhibiting internal cohesion and external separation. A within-clusterwise regression is introduced to achieve this aim and, since the possible presence of a linear relation 'between' clusters should also be taken into account, a general regression model is introduced that accounts for both the between-cluster and the within-cluster regression variation. Decompositions of the explained variance of the responses are given, and the least-squares estimation of the parameters is derived, together with an appropriate coordinate descent algorithm. The performance of the proposed methodology is evaluated on different datasets.
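The between/within split the abstract describes can be illustrated with known cluster labels: the 'between' regression relates cluster means of the response to cluster means of the predictors, while the 'within' regression relates deviations from those means. This is a minimal least-squares sketch (the function name and the single-shared-slope setup are illustrative assumptions, not the paper's coordinate descent estimator, which also estimates the clusters).

```python
import numpy as np

def between_within_fit(X, y, labels):
    """Least-squares fit of a 'between' regression on cluster means and a
    'within' regression on deviations from those means."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    Xb = np.empty_like(X)
    yb = np.empty_like(y)
    for g in np.unique(labels):
        m = labels == g
        Xb[m] = X[m].mean(axis=0)   # cluster means broadcast back to rows
        yb[m] = y[m].mean()
    Xw, yw = X - Xb, y - yb          # within-cluster deviations
    beta_b, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), Xb]), yb, rcond=None)
    beta_w, *_ = np.linalg.lstsq(Xw, yw, rcond=None)  # deviations are centred: no intercept
    return beta_b, beta_w
```

The two slope vectors can differ in sign and magnitude, which is exactly the situation where a single pooled regression, or plain clusterwise regression, conflates the between-cluster and within-cluster relations.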
64.
We present a novel methodology for a comprehensive statistical analysis of approximately periodic biosignal data. There are two main challenges in such analysis: (1) the automatic extraction (segmentation) of cycles from long, cyclostationary biosignals and (2) the subsequent statistical analysis, which in many cases involves the separation of temporal and amplitude variabilities. The proposed framework provides a principled approach for statistical analysis of such signals, which in turn allows for an efficient cycle segmentation algorithm. This is achieved using a convenient representation of functions called the square-root velocity function (SRVF). The segmented cycles, represented by SRVFs, are temporally aligned using the notion of the Karcher mean, which in turn allows for more efficient statistical summaries of signals. We show the strengths of this method through various disease classification experiments. In the case of myocardial infarction detection and localization, we show that our method compares favorably to methods described in the current literature.
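The SRVF representation itself is a simple transform: for a signal f, q(t) = f'(t) / sqrt(|f'(t)|), equivalently sign(f')·sqrt(|f'|). A small numerical sketch (discrete derivative via `np.gradient`; the function name is illustrative):

```python
import numpy as np

def srvf(f, t):
    """Square-root velocity function q(t) = f'(t) / sqrt(|f'(t)|),
    computed from samples of f on the grid t."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))
```

A useful property: the squared L2 norm of q equals the total variation of f (since q^2 = |f'|), and the L2 distance between SRVFs is preserved under simultaneous warping of two signals, which is what makes alignment and Karcher-mean computation well posed in this representation.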
65.
This paper presents estimates for the parameters of the Block and Basu bivariate lifetime distribution in the presence of covariates and a cure fraction, applied to survival data in which some individuals may never experience the event of interest and two lifetimes are associated with each unit. A Bayesian procedure is used to obtain point estimates and credible intervals for the unknown parameters. Posterior summaries of interest are obtained using standard Markov chain Monte Carlo methods via the rjags package for R. The proposed methodology is illustrated on a Diabetic Retinopathy Study data set.
66.
We construct a mixture distribution combining infant, exogenous, and Gompertzian/non-Gompertzian senescent mortality. Using mortality data on Swedish females from 1751–, we show that this model outperforms models without these features, and compare its trends in cohort and period mortality over time. We find an almost complete disappearance of exogenous mortality within the last century of period mortality, with cohort mortality approaching the same limits. Both Gompertzian and non-Gompertzian senescent mortality are consistently present, with the estimated balance between them oscillating constantly. While the parameters of the latter appear to be trending over time, the parameters of the former do not.
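A classical additive decomposition of this kind is the Siler-type hazard: a declining infant term, a constant exogenous ('background') term, and an exponentially rising Gompertz senescence term. The sketch below shows that structure and its closed-form survival function; it is a generic illustration of the component structure, not the paper's specific mixture (which also includes a non-Gompertzian senescent component).

```python
import numpy as np

def siler_hazard(x, a1, b1, a2, a3, b3):
    """Siler-type force of mortality at age x: infant decline a1*exp(-b1*x),
    constant exogenous level a2, and Gompertz senescence a3*exp(b3*x)."""
    return a1 * np.exp(-b1 * x) + a2 + a3 * np.exp(b3 * x)

def siler_survival(x, a1, b1, a2, a3, b3):
    # S(x) = exp(-H(x)) with the closed-form cumulative hazard H.
    H = (a1 / b1) * (1 - np.exp(-b1 * x)) + a2 * x + (a3 / b3) * (np.exp(b3 * x) - 1)
    return np.exp(-H)
```

In this parameterization the abstract's finding corresponds to a2 shrinking toward zero over the last century of period mortality, while the senescent terms persist.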
67.
Mixture distributions are more useful than single ('pure') distributions for modelling heterogeneous data sets. The aim of this paper is to propose, for the first time, a mixture of Weibull–Poisson (WP) distributions to model heterogeneous data sets, creating a powerful alternative mixture distribution. Many features of the proposed mixture of WP distributions are examined. The expectation–maximization (EM) algorithm is used to obtain the maximum-likelihood estimates of the parameters, and a simulation study evaluates the performance of the proposed EM scheme. Applications to two real heterogeneous data sets are given to show the flexibility and potential of the new mixture distribution.
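The E/M alternation behind such a scheme can be sketched on a simpler two-component mixture. Here exponential components stand in for the Weibull–Poisson components (an assumption made purely to keep the M-step in closed form; the WP case requires numerical M-step updates), but the structure — posterior responsibilities in the E-step, weighted component MLEs in the M-step — is the same.

```python
import numpy as np

def em_two_exponentials(x, n_iter=200):
    """EM for a two-component exponential mixture (a simplified stand-in for
    the Weibull-Poisson components; the E/M alternation is identical)."""
    x = np.asarray(x, float)
    mean = np.mean(x)
    w = np.array([0.5, 0.5])
    lam = np.array([2.0 / mean, 0.5 / mean])  # deliberately split initial rates
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = w[:, None] * lam[:, None] * np.exp(-lam[:, None] * x[None, :])
        r = dens / dens.sum(axis=0)
        # M-step: closed-form updates of the weights and rates.
        w = r.mean(axis=1)
        lam = r.sum(axis=1) / (r * x).sum(axis=1)
    return w, lam
```

For the actual WP mixture the M-step would maximize the responsibility-weighted WP log-likelihood numerically, but convergence behaviour and the evaluation by simulation proceed exactly as in this toy version.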
68.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = 0.013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing-not-at-random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst-reasonable-case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
69.
This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method, using a specific nonserial dilution experiment. The finite-sample distributions of these test statistics are unknown, because they are functions of correlated count data. A simulation study is conducted to investigate the type I and type II error rates. For a balanced experimental design, the likelihood ratio test and the main-effects analysis of variance (ANOVA) test for microbiological methods achieved nominal type I error rates and provided the highest power, compared with a test on weighted averages and two other ANOVA tests. The likelihood ratio test is preferred because it can also be used for unbalanced designs. It is demonstrated that an increase in power can only be achieved by increasing the spiked number of organisms used in the experiment. Surprisingly, the power is not affected by the number of dilutions or the number of test samples. A real case study is provided to illustrate the theory. Copyright © 2013 John Wiley & Sons, Ltd.
70.
News     
U. S. National Income Series Revised—Congress Votes No on Censuses of Business and Manufactures—Britain Revises Living Cost Index—U. S. and U. K. Surveys Uncover Lacks in Statistical Training—Forthcoming Statistical Conferences